Abstracts

Day 1

Abstracts 1

Chair: Sharm Thuraisingam

Performance of interim analyses in a two-by-two factorial design with a time-to-event outcome: a simulation study of the VAPOR-C trial

Anurika De Silva, The University of Melbourne.

Accumulating data during a trial may be monitored so the trial can be stopped early for superiority or futility, guided by statistical stopping rules that maintain the overall Type I and Type II error rates. There is limited guidance on this topic for factorial trial designs. The Volatile Anaesthesia and Perioperative Outcomes Related to Cancer (VAPOR-C) trial is a two-by-two factorial randomised trial with a time-to-event outcome, which will conduct an interim analysis at 50% of the accumulated data using pre-defined stopping rules for early superiority. To examine the performance of conventional stopping rules in a two-by-two factorial design with a survival outcome and a single interim analysis at 50% of the planned events, we conducted a simulation study under scenarios of no, synergistic, or antagonistic interaction. If a statistically significant interaction is observed at interim, the data will be analysed as a multi-arm trial: if any of the treatment effects are statistically significant, the corresponding active arm will be dropped, followed by a final multi-arm analysis; otherwise, all active arms will proceed at interim and an interaction test will be performed at the final analysis. If a statistically significant interaction is not observed at interim, the data will be analysed as a factorial trial: treatments are dropped based on the statistical significance of the marginal treatment effects, followed by a final two-arm analysis; otherwise, all active arms will proceed at interim and an interaction test will be performed at the final analysis. In this study, we investigate the probability of a statistically significant interaction when testing for interaction between the main treatment effects at the interim and final analyses. Following the interaction test, we investigate the probability that the trial is analysed as factorial only, multi-arm only, or a combination of factorial and multi-arm across the interim and final analyses, as well as the Type I error rates and power when testing for treatment effects using the stopping rules.
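
As a rough sketch of the kind of simulation involved (an illustration only, not the VAPOR-C analysis plan: it uses exponential event times, a closed-form exponential estimate of the log hazard ratio in place of a Cox model, shows only one marginal comparison, and takes roughly the classic O'Brien-Fleming boundary for the first of two looks):

```python
import numpy as np

rng = np.random.default_rng(2024)

def exp_loghr(t_trt, e_trt, t_ctl, e_ctl):
    """Exponential-model estimate of the log hazard ratio and its variance
    (events / total exposure in each arm); a Cox model would normally be used."""
    loghr = np.log(e_trt.sum() / t_trt.sum()) - np.log(e_ctl.sum() / t_ctl.sum())
    var = 1 / e_trt.sum() + 1 / e_ctl.sum()
    return loghr, var

n_per_cell = 250
hr_a, hr_b = 0.75, 1.0                       # true marginal hazard ratios, no interaction
a = np.repeat([0, 0, 1, 1], n_per_cell)      # factor A allocation by cell
b = np.tile(np.repeat([0, 1], n_per_cell), 2)
rate = 0.10 * hr_a**a * hr_b**b              # exponential event rate in each cell
t = rng.exponential(1 / rate)

# Interim look at 50% of events: administratively censor at the calendar time of
# the halfway event, then test factor A marginally against an O'Brien-Fleming-type
# boundary (about 2.80 at the first of two equally spaced looks)
interim_time = np.sort(t)[len(t) // 2]
event = t <= interim_time
obs = np.minimum(t, interim_time)

loghr, var = exp_loghr(obs[a == 1], event[a == 1], obs[a == 0], event[a == 0])
z = loghr / np.sqrt(var)
print(f"Interim z for factor A: {z:.2f}; stop for superiority if z < -2.80")
```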

Days alive and at home: overcoming statistical challenges in analysis

Vanessa Pac Soo, The University of Melbourne.

Days alive and at home (DAH) is a patient-centred measure recommended as a key outcome to better inform patients and physicians when planning surgery. This measure is also used as an outcome in observational studies and clinical trials and incorporates length of stay, re-admission, post-acute hospital discharge, and early deaths after surgery into a single outcome metric. DAH is recorded as zero if a patient died or was still in hospital within the follow-up period after surgery. The distribution of DAH is generally left-skewed with a spike at zero, creating challenges for the analysis, as many statistical methods require a normal distribution and commonly applied transformations are not useful for handling a bimodal distribution. The IDOCS study is an observational study testing the hypothesis that, in adults without anaemia undergoing cardiac surgery, iron deficiency is associated with worse DAH within 30 days of surgery than an iron-replete state. The presentation will illustrate the approach to analysing DAH within 30 days using the IDOCS study.

In the IDOCS study, the sample size calculation was simulation-based, as conventional sample size calculation methods were not appropriate. Both the iron-deficient and iron-replete groups had a left-skewed distribution with a spike at zero. Given this asymmetry, it was more relevant to model the median than the mean (as modelled in linear regression); moreover, the underlying assumptions of linear regression were not met. In IDOCS, DAH within 30 days of surgery was analysed using quantile regression, with 1,000 bootstrap replications, to estimate the difference between iron-deficient and iron-replete participants at the 25th, 50th and 75th percentiles.

Quantile regression is useful for understanding outcomes that are not normally distributed and allows estimation of the association of an exposure or intervention with different quantiles of DAH, giving a fuller picture of the effect.
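
As a rough illustration of this approach (the simulated data and variable names below are invented for the example, not the IDOCS dataset), quantile regression with a nonparametric bootstrap can be set up along these lines:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the outcome: DAH within 30 days, left-skewed with a spike at zero
rng = np.random.default_rng(0)
n = 400
iron_deficient = rng.integers(0, 2, n)
dah = np.clip(np.round(27 - 2 * iron_deficient - rng.gamma(2, 2, n)), 0, 30)
dah[rng.random(n) < 0.05] = 0                  # spike at zero (death / still in hospital)
df = pd.DataFrame({"dah": dah, "iron_deficient": iron_deficient})

# Quantile regression at the 25th, 50th and 75th percentiles, with a simple
# nonparametric bootstrap for standard errors (the study used 1,000 replications)
for q in (0.25, 0.50, 0.75):
    fit = smf.quantreg("dah ~ iron_deficient", df).fit(q=q)
    boots = []
    for _ in range(200):                       # fewer replications here, for speed
        bs = df.sample(frac=1, replace=True)
        boots.append(smf.quantreg("dah ~ iron_deficient", bs).fit(q=q).params["iron_deficient"])
    est = fit.params["iron_deficient"]
    se = np.std(boots, ddof=1)
    print(f"q={q}: difference = {est:.2f} (bootstrap SE {se:.2f})")
```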

Combining multiple imputation and inverse probability weighting to address intermittent missing data in longitudinal studies with missing outcome data

Melissa Middleton, Murdoch Children’s Research Institute.

Missing data problems are ubiquitous in longitudinal studies, with dropout potentially resulting in large blocks of missing outcome data. Omitting incomplete records from analyses, due to dropout and any sporadic missing data, may result in biased estimates. Two approaches for handling missing data, multiple imputation (MI) and inverse probability weighting (IPW), are commonly used, with the latter more applicable for block missingness. However, MI and IPW alone may result in biased estimates when addressing both sporadic and block missingness together. Alternatively, the approaches may be used in combination, but the relative performance of MI/IPW, compared to MI and IPW alone, is currently unknown in the context of longitudinal studies with dropout.

We therefore conducted a simulation study to assess the performance of two MI/IPW approaches, MI-alone, IPW-alone, and a complete-case analysis, in this setting. We considered a range of realistic scenarios, varying the amount of missing data, estimand of interest, the missing data mechanism, and the sample size, and illustrated these approaches in a case study using data from the Barwon Infant Study.

Results show that MI/IPW may be suitable for addressing sporadic missingness and large proportions of missing outcomes, when there are low levels of missing data or the outcome is a binary measure, provided the weights and all two-way interactions with analysis variables are included in the imputation model. However, both MI/IPW approaches showed bias in either the point estimate or the standard error in scenarios with a continuous outcome and high levels of missing data. IPW-alone showed poor performance in small samples, while MI-alone was approximately unbiased for the point estimate, with nominal coverage, across most scenarios. Overall, these results suggest that MI-alone may be the preferred analytical approach when there are large amounts of missing outcome data and sporadically missing covariates in longitudinal studies.
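
A rough sketch of how such a combination might be wired up (heavily simplified: a single covariate, the weight included as a plain column in the imputation model rather than with all two-way interactions, ordinary rather than robust standard errors, and illustrative variable names throughout):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                               # covariate with sporadic missingness
y = 1 + 0.5 * x + rng.normal(size=n)                 # outcome, missing through dropout
x_obs = np.where(rng.random(n) < 0.15, np.nan, x)
dropout = rng.random(n) < 1 / (1 + np.exp(-(-1.5 + 0.8 * x)))
y_obs = np.where(dropout, np.nan, y)

# Step 1: IPW for dropout, via a logistic model for being observed at follow-up
# (in practice the weight model would use fully observed baseline variables)
obs = ~np.isnan(y_obs)
ps_x = np.nan_to_num(x_obs)                          # crude stand-in for the sketch
ps = sm.GLM(obs.astype(float), sm.add_constant(ps_x),
            family=sm.families.Binomial()).fit().fittedvalues
w = 1 / ps

# Step 2: MI for the sporadically missing covariate, with the weight included in
# the imputation model, then a weighted analysis on each completed dataset
df = pd.DataFrame({"y": y_obs, "x": x_obs, "w": w})
df_cc = df.loc[obs].reset_index(drop=True)           # those retained in the analysis
imp = MICEData(df_cc)
ests, vars_ = [], []
for _ in range(10):
    imp.update_all(5)                                # a few imputation cycles per draw
    d = imp.data
    fit = sm.WLS(d["y"], sm.add_constant(d["x"]), weights=d["w"]).fit()
    ests.append(fit.params["x"])
    vars_.append(fit.bse["x"] ** 2)

# Step 3: pool the estimates with Rubin's rules
qbar = np.mean(ests)
t_var = np.mean(vars_) + (1 + 1 / len(ests)) * np.var(ests, ddof=1)
print(f"MI/IPW estimate of the x effect: {qbar:.3f} (SE {np.sqrt(t_var):.3f})")
```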

Distributional regression modelling ... a new way of looking at data

Fernando Marmolejo-Ramos, University of South Australia.

It is common to see data across different fields being analysed via t-tests, ANOVA and linear regression models. It’s less common to see non-parametric or robust statistical approaches. However, all these approaches tend to focus on the mean, the median or other location parameters in the data. Also, they embrace the normal distribution as the model for the dependent variable. Techniques within the family of ‘distributional’ methods provide a new way of looking at data by encouraging the researcher to look at the effect of covariates on all parameters of the dependent variable’s distribution. This talk has the goal of providing some ‘food for thought’ in relation to these new techniques.
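
For instance, a minimal location-scale ("distributional") regression, in which a covariate is allowed to affect both the mean and the spread of the response, can be fitted by maximum likelihood in a few lines (an illustrative sketch only; dedicated distributional regression frameworks offer many more distribution families and smooth terms):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data where a covariate shifts both the mean and the spread of y
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-2, 2, n)
y = 1 + 0.8 * x + rng.normal(scale=np.exp(0.2 + 0.5 * x), size=n)

# Distributional (location-scale) regression: both mu and log(sigma) depend on x
def negloglik(theta):
    b0, b1, g0, g1 = theta
    mu = b0 + b1 * x
    sigma = np.exp(g0 + g1 * x)
    return -norm.logpdf(y, loc=mu, scale=sigma).sum()

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
b0, b1, g0, g1 = fit.x
print(f"mean model:  mu = {b0:.2f} + {b1:.2f} x")
print(f"scale model: log(sigma) = {g0:.2f} + {g1:.2f} x")
```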

Abstracts 2

Chair: Rex Parsons

An Introduction to Probabilistic Programming in Python using PyMC

The Danh Phan, Monash University.

This talk introduces new audiences to probabilistic programming in Python. It aims to get users up and running quickly with Bayesian analysis using PyMC, a user-friendly and popular open-source probabilistic programming framework written in Python. Several examples are used to illustrate different features of Bayesian modeling, especially distributions and Markov Chain Monte Carlo (MCMC) sampling methods. This presentation will allow users to leverage Bayesian methods to analyze their own data effectively in their academic research and industrial work.
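
A minimal example of the PyMC workflow described (a simple normal model on simulated data; the talk's own examples may differ):

```python
import numpy as np
import pymc as pm
import arviz as az

# Simulated data: 100 observations from a normal distribution
rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.5, size=100)

with pm.Model() as model:
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)             # prior for the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)             # prior for the standard deviation
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)    # likelihood
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)  # MCMC (NUTS)

print(az.summary(idata, var_names=["mu", "sigma"]))
```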

clusterBMA: Combine insights from multiple clustering algorithms with Bayesian model averaging

Owen Forbes, Queensland University of Technology.

Clustering is one of the most common tasks for applied statisticians across a wide variety of industry, government and research settings. When an analyst reports results from one ‘best’ model out of several candidate clustering models, this ignores the uncertainty that arises from model selection, and results in inferences that are sensitive to the particular model and parameters chosen.

In this work we introduce clusterBMA, extending Bayesian Model Averaging (BMA) methodology to combine inference across multiple algorithms for unsupervised clustering of a given dataset, using a combination of internal clustering validation criteria to weight the results from each model. BMA offers attractive benefits over existing approaches, including an intuitive probabilistic interpretation of an overall cluster structure integrated across multiple sets of clustering results, flexibility to accommodate various input algorithms, and quantification of model-based uncertainty. These features enable improved communication of uncertainty and variability across models for clustering applications, providing a clearer understanding of the insights offered by different clustering methods.

We present results from a simulation study to explore the utility of this technique for identifying robust integrated clusters with model-based uncertainty, under varying conditions of separation between simulated clusters. We then implement this method in a substantive real world case study, clustering young people based on electrical brain activity and relating these clusters to measures of mental health and cognitive function. Our method offers extra insight compared to clustering results from individual algorithms, particularly regarding consistency or ambiguity in cluster allocations between multiple clustering algorithms.

The method is implemented in the freely available R package clusterBMA, and this session will include a practical demonstration to facilitate understanding of how this tool may be useful for audience members in their work.
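
The weighting idea can be illustrated in a few lines (a schematic illustration only, not the clusterBMA package or its exact weighting scheme; here the silhouette stands in for the internal validation criteria described above):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
from sklearn.datasets import make_blobs

# Toy data with three clusters
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Two candidate clustering models
labels = {
    "kmeans": KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
    "gmm": GaussianMixture(n_components=3, random_state=0).fit_predict(X),
}

# Weight each model by a rescaled internal validation score (silhouette here)
scores = {m: silhouette_score(X, l) for m, l in labels.items()}
total = sum(scores.values())
weights = {m: s / total for m, s in scores.items()}

# Model-averaged pairwise co-clustering matrix: entry (i, j) is the weighted
# probability that observations i and j fall in the same cluster
n = X.shape[0]
consensus = np.zeros((n, n))
for m, l in labels.items():
    consensus += weights[m] * (l[:, None] == l[None, :])

print(weights)
print(consensus[:5, :5].round(2))
```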

What was old is young again: A statistical career ageing in reverse

Kylie Lange, The University of Adelaide.

I graduated from my undergraduate degree as a ‘Young Statistician’ over 20 years ago. I have had a career as a statistician supporting university research in areas ranging from clinical trials to questionnaire design, and from official statistics to wildlife surveys. Now, 23 years later, I am pursuing a PhD and am once again a ‘Young’ Student Statistician.

In this presentation I will discuss how my attitude towards undertaking postgraduate study has changed over the years - from ‘I will never’, to ‘well, maybe’, to ‘sign me up’. I will cover some of the lessons that I learnt along the way that helped me build a rewarding career without postgraduate qualifications, what I hope I will gain from the experience of completing a PhD, and why not everyone has to build a statistical career in the same way.

Day 2

Abstracts 3

Chair: Lisa Thomasen

Design Assurance Test in the Dress Rehearsal for the Census of Aotearoa New Zealand

Betsy Williams, Statistics New Zealand.

Statistics New Zealand wanted to test and optimise its design for collecting responses for the 2023 Census. Through the Design Assurance Test in March 2022, carried out in the North Island city of Tauranga, we planned and analysed a test that answered many questions. What is the impact of a reminder before a due date versus after? How do you pivot when your plans change (thanks, COVID!), but protect the integrity of your controlled trial? Which statistical model of a three-armed, cluster-randomised trial will fit the data properly? In the 2022 Dress Rehearsal for the Census, we designed, planned, prepared for, pivoted with, and finally analysed a test of the collection procedures. We considered Generalised Estimating Equations (GEE), a Generalised Linear Mixed Model (GLMM), and an area-level Generalised Linear Model (GLM). After considering why the different estimating strategies might disagree, we pre-specified what we would do in that case, though as it turned out, there were no contradictions between model outcomes.

Using a binomial logistic GLMM, we found the following. The presence of staff following up non-responding dwellings boosted dwelling-level response rates (effect size = 6.7 percentage points, p < 0.001), even though these staff could not make direct contact due to the presence of COVID in the community. Sending a reminder before the mock Census Day, instead of after, had no effect on final dwelling-level response 58 days after Census Day (effect size = -0.8 percentage points, p = 0.59). However, an exploratory analysis found the pre-reminder had a significant effect on response by two days after Census Day (effect size = 6.5 percentage points, p < 0.001).
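
To indicate the kind of model comparison involved, a binomial GEE with an exchangeable working correlation (one of the three candidate approaches named above) can be fitted along these lines; all variable names and numbers below are simulated stand-ins, not the Census test data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated cluster-randomised data: dwellings nested in areas, three trial arms
rng = np.random.default_rng(0)
n_clusters, n_per = 60, 50
cluster = np.repeat(np.arange(n_clusters), n_per)
arm = np.repeat(rng.integers(0, 3, n_clusters), n_per)            # arm assigned per cluster
cluster_effect = np.repeat(rng.normal(0, 0.3, n_clusters), n_per)
p = 1 / (1 + np.exp(-(0.3 + 0.25 * (arm == 1) - 0.05 * (arm == 2) + cluster_effect)))
responded = rng.binomial(1, p)
df = pd.DataFrame({"responded": responded, "arm": arm.astype(str), "cluster": cluster})

# Binomial GEE with an exchangeable working correlation within clusters; a GLMM or
# an area-level GLM could be fitted to the same data for comparison
gee = smf.gee("responded ~ arm", groups="cluster", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```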

Record Linking in Official Statistics

Sam Cleland, Statistics New Zealand.

How do you link data sets that don’t have a common unique identifier? This presentation will focus on the high-level methodology used by Statistics NZ to link census responses to administrative records in the integrated data infrastructure (IDI), with a focus on the Fellegi-Sunter method.

During the 2018 New Zealand census, an unexpectedly low response rate would have resulted in final data outputs that may not have been fit for purpose. Fortunately, StatsNZ already had the integrated data infrastructure (IDI), a large research database that holds de-identified microdata about people and households in New Zealand. Researchers use the IDI to conduct cross-sector research that provides insight into our society and economy.

One issue with using IDI data within the census is the lack of a common unique identifier. The solution is to use the Fellegi-Sunter method to probabilistically link records across the datasets. The Fellegi-Sunter method is widely used in many industries and will be detailed in the presentation. The links between these datasets allow the inclusion of administrative data in final census outputs, and the ability to link them means that the 2023 census will include IDI administrative data by design.
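
At its core, the Fellegi-Sunter method scores each candidate record pair by summing field-level agreement weights: log-likelihood ratios comparing the probability of agreement among true matches (m) with that among non-matches (u). A toy illustration follows; the fields and m- and u-probabilities are invented for the example (in practice they are estimated, e.g. by an EM algorithm):

```python
import numpy as np

# Illustrative m- and u-probabilities for three comparison fields:
# m = P(field agrees | records truly match), u = P(field agrees | non-match)
fields = {
    "surname":       {"m": 0.95, "u": 0.01},
    "date_of_birth": {"m": 0.98, "u": 0.003},
    "sex":           {"m": 0.99, "u": 0.50},
}

def match_weight(agreement):
    """Sum of log2 likelihood ratios over fields; agreement maps field -> bool."""
    w = 0.0
    for f, p in fields.items():
        if agreement[f]:
            w += np.log2(p["m"] / p["u"])
        else:
            w += np.log2((1 - p["m"]) / (1 - p["u"]))
    return w

# A candidate pair agreeing on surname and date of birth but not sex
print(match_weight({"surname": True, "date_of_birth": True, "sex": False}))
# Pairs above an upper threshold are declared links, below a lower threshold
# non-links, and those in between are sent for clerical review.
```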

But people don’t provide administrative data for census purposes, so is it appropriate to use it in this way? The presentation will also briefly cover some of the legal, ethical, and social implications of linking administrative data for use in census outputs. Administrative data is of particular interest in official statistics now due to the international trend of falling response rates in official surveys.

Identification and Prediction of Accident Prone Roads and Blackspots in Victoria, Australia, Using Geographic Information Systems and Machine Learning

Mahamendige Asel Mendis, Swinburne University of Technology.

Background. Road accidents are the leading cause of death among people aged 5-29 years. Road accidents claim the lives of over a million people every year and cause 20-50 million life-altering injuries. Blackspots are defined as a high concentration of road accidents in a specific area. Although research related to blackspots has been ongoing for decades, there is no consistent definition of a blackspot due to geographic, demographic and infrastructural differences across the world. Researchers have focused particularly on the factors contributing to blackspots, and geographic information systems (GIS) have played a key role in analyzing these hotspots. There remain many unanswered questions regarding blackspots and the methods available to properly identify them. Several past studies have found that the location of road accident concentrations is not random and that many factors, such as road geometry and traffic volume, need to be taken into consideration before classifying an area as a blackspot.

Methods. This research uses spatial clustering, statistical modelling, and machine learning to identify blackspots in Victoria. GIS will allow us to collect, analyze, visualize, and aggregate the data at the required geographic level. The data required for the analysis are a mixture of point and polygon data, which will require pre-processing in GIS.
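
As one simple illustration of the spatial clustering step (not necessarily the method that will ultimately be adopted in the research), a density-based clusterer can flag candidate blackspots from point-level crash locations:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Simulated accident locations (eastings/northings in metres); two dense pockets
# plus background noise stand in for candidate blackspots
rng = np.random.default_rng(0)
background = rng.uniform(0, 10_000, size=(300, 2))
hotspot_1 = rng.normal([2_000, 3_000], 60, size=(40, 2))
hotspot_2 = rng.normal([7_500, 8_000], 80, size=(60, 2))
accidents = np.vstack([background, hotspot_1, hotspot_2])

# Density-based clustering: accidents within ~150 m of each other, in groups of
# at least 10, form a candidate blackspot; label -1 marks background noise
labels = DBSCAN(eps=150, min_samples=10).fit_predict(accidents)
for k in sorted(set(labels) - {-1}):
    pts = accidents[labels == k]
    print(f"candidate blackspot {k}: {len(pts)} accidents, centroid {pts.mean(axis=0).round()}")
```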

Anticipated Outcomes. The aim of the research is to build a comprehensive prediction model for blackspots using spatial data from various domains (such as demographics, transport and points of interest), considering the unique characteristics of the location of each blackspot. This research will allow policy makers to make decisions on infrastructure investment to better protect the community, and allow healthcare stakeholders to target areas of concern and be more proactive in responding to accidents.

Abstracts 4

Chair: Luca Maestrini

Auditing Ranked Voting Elections with Dirichlet-Tree Models

Floyd Everest, The University of Melbourne.

Ranked voting systems, such as instant-runoff voting (IRV) and single transferable vote (STV), are used in many places around the world. They are more complex than plurality and scoring rules, presenting a challenge for auditing their outcomes: there is no known risk-limiting audit (RLA) method for STV other than a full hand count. We present a new approach to auditing ranked systems that uses a statistical model, a Dirichlet-tree, that can cope with high-dimensional parameters in a computationally efficient manner. We demonstrate this approach with a ballot-polling Bayesian audit for IRV elections. Although the technique is not known to be risk-limiting, we suggest some strategies that might allow it to be calibrated to limit risk.
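
To give a flavour of ballot-polling Bayesian audits, here is a deliberately simplified sketch (an assumption for illustration only: a two-candidate contest with a flat Dirichlet prior over ballot types, not the Dirichlet-tree model over full IRV rankings presented in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([1] * 540 + [0] * 460)   # sampled ballots: 1 = reported winner ranked first

alpha_prior = np.array([1.0, 1.0])         # uniform Dirichlet (here Beta) prior over the two types
counts = np.array([sample.sum(), len(sample) - sample.sum()])
posterior = alpha_prior + counts

# Posterior probability that the reported winner lacks majority support (an "upset")
draws = rng.dirichlet(posterior, size=100_000)
upset_prob = np.mean(draws[:, 0] <= 0.5)
print(f"Posterior probability of an upset: {upset_prob:.4f}")
# The audit escalates (samples more ballots) until the upset probability falls below
# a pre-set tolerance, or otherwise proceeds to a full hand count.
```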

Multivariate strong invariance principle for Markov chain Monte Carlo

Arka Banerjee, Indian Institute of Technology, Kanpur.

A strong invariance principle (SIP) holds for the partial sums of a mean-corrected random sequence if they can be approximated by a scaled Brownian motion in an almost sure sense. Over the years, SIP rates have consistently improved for various dependent and independent structures of the random sequences. In this paper, we provide relatively tight SIP rates for polynomially and geometrically ergodic Markov chains. We use wide-sense regenerative structures of Harris ergodic Markov chains, bypassing the need to assume a one-step minorization. A key feature of the rates is that they are completely specified. This allows users of Markov chain Monte Carlo to verify key assumptions employed in output analysis.
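
In generic terms (an illustrative statement of the form only, not the specific rates established in this work), such a result asserts that for a mean-corrected sequence \(X_1, X_2, \dots\) there exist a constant \(\sigma > 0\) and a standard Brownian motion \(B\) such that
\[
\Big| \sum_{i=1}^{n} (X_i - \mu) - \sigma B(n) \Big| = O\big(\psi(n)\big) \quad \text{almost surely},
\]
where \(\psi(n)\) is the approximation rate; smaller \(\psi\) means a tighter approximation, and making \(\psi\) fully explicit is what lets MCMC practitioners check the assumptions used in output analysis.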

Stochastic variational inference for heteroskedastic time series models

Hanwen Xuan, The University of New South Wales.

We derive stochastic variational inference algorithms for fitting various heteroskedastic time series models using Gaussian approximating densities. Gaussian, t and skew-t response GARCH models are examined. We implement an efficient stochastic gradient ascent approach based upon the use of control variates or the reparameterization trick and show that the proposed approach offers a fast and accurate alternative to Markov chain Monte Carlo sampling. We also present a sequential updating implementation of our variational algorithms which is suitable for the construction of an efficient portfolio optimization strategy.
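
The reparameterization-trick gradient estimator at the heart of such algorithms can be shown on a toy model whose exact posterior is known (a sketch only: a normal mean with known variance, far simpler than the GARCH-type models in the talk, and with manual gradients in place of automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=1.0, size=200)
n, prior_sd = len(y), 10.0

m, log_s = 0.0, 0.0                          # variational parameters of q = N(m, s^2)
lr, n_mc = 1e-3, 20
for step in range(10_000):
    s = np.exp(log_s)
    eps = rng.normal(size=n_mc)
    theta = m + s * eps                      # reparameterised draws from q
    # gradient of log p(y, theta) with respect to theta, for each draw
    dlogp = (y.sum() - n * theta) - theta / prior_sd**2
    # chain rule back to the variational parameters; the +1 is the entropy gradient
    m += lr * np.mean(dlogp)
    log_s += lr * (np.mean(dlogp * s * eps) + 1.0)

exact_var = 1.0 / (n + 1.0 / prior_sd**2)
exact_mean = exact_var * y.sum()
print(f"variational: mean {m:.3f}, sd {np.exp(log_s):.3f}")
print(f"exact:       mean {exact_mean:.3f}, sd {np.sqrt(exact_var):.3f}")
```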

Subbagging Variable Selection for Massive Data

Xian Li, The Australian National University.

Massive datasets usually possess the features of large \(N\) (number of observations) and large \(p\) (number of variables). In this work, we propose a subbagging variable selection approach to select relevant variables from massive datasets. Subbagging (subsample aggregating) is an aggregation approach originally from the machine learning literature, which is well suited to the recent trends of massive data analysis and parallel computing. Specifically, we propose a subbagging loss function based on a collection of subsample estimators, which uses a quadratic form to approximate the full sample loss function. The shrinkage estimation and variable selection can be further conducted based on this subbagging loss function. We then theoretically establish the root \(N\)-consistency and selection consistency for this approach. It is also proved that the resulting estimator possesses the oracle property. However, variance inflation is found in its asymptotic variance compared to the full sample estimator. A modified BIC-type criterion is further developed specifically to tune the hyperparameter in this method. An extensive numerical study is presented to illustrate the finite sample performance and computational efficiency.

Abstracts 5

Chair: Kevin Wang

Investigating the Associations between Vitamin D Deficiency in Infancy and Early Childhood Food Allergy Outcomes, Using the HealthNuts Population-Based Study

Kevin Wong, The University of Melbourne.

Background. Vitamin D deficiency (VDD) is a risk factor for the development of food allergy; however, it remains unknown whether it plays a role in the persistence of food allergy. We investigated whether VDD at age 12 months impacted the persistence of egg or peanut allergy by age 6 years.

Methods. Data from the HealthNuts study of 5276 children were used. Food allergies were diagnosed via oral food challenge following a positive skin prick test (≥1mm). Children with food allergy at age 1 were offered repeat diagnoses at ages 4 and 6 years. Vitamin D levels were obtained by measuring 25(OH)D3 levels from blood collected at age 12 months. Multivariable logistic regression, adjusted for potential confounders, estimated the association between VDD at age 1 and the persistence of egg (n = 316) and peanut (n = 108) allergies. Sensitivity analyses explored the inclusion of additional covariates for adjustment; consideration of 25(OH)D3 levels on the continuous scale as a linear exposure; and aggregation of 25(OH)D3 into quintiles, using the middle level as reference, to explore potential non-linear associations.

Results. There was little evidence of an association between VDD (≤50nmol/L) at age 1 year and the odds of persistent egg allergy (aOR, 0.831; 95% CI: 0.37, 1.75) or peanut allergy (aOR, 1.84; 95% CI: 0.50, 6.75); results were imprecise. Further adjustment for covariates did not alter findings. Insufficient evidence was found for a relationship between 25(OH)D3 as a continuous exposure and persistent peanut allergy (aOR, 1.00; 95% CI: 0.98, 1.02), but some for egg allergy (aOR, 1.01; 95% CI: 1.00, 1.03). When compared to children in the middle quintile (59.6-73.3 nmol/L), those in the highest quintile had an increased risk of persistent egg allergy (aOR 3.70, 95% CI 1.19, 11.52).

Discussion. There was limited evidence that Vitamin D deficiency at age 12 months impacted the persistence of food allergy.

Socioeconomic position, inflammation, metabolomic profile and cardiometabolic risk in early to mid-childhood: analysis of two Australian cohort studies

Peixuan Li, The University of Melbourne.

Background. Lower socioeconomic position (SEP) is associated with inflammation and an unfavourable metabolomic profile, which both predict cardiovascular disease (CVD) risk in adults. It is unclear when these relationships first become evident in the life course. We investigated the associations between SEP, inflammation, metabolomic profile, and cardiovascular phenotypes in early and mid-childhood.

Methods. Data were available from Australian children recruited to the Barwon Infant Study (BIS, n= 511) and Child Health CheckPoint study (CHCP, n=1874). In both cohorts, neighbourhood and household SEP were measured at birth. Metabolomic profile, inflammatory biomarkers (high sensitivity C-reactive protein, hsCRP, and glycoprotein acetyls, GlycA) and preclinical cardiovascular phenotypes (mean arterial pressure, MAP, carotid intima-media thickness, CIMT, and pulse wave velocity, PWV) were measured at age 4 years in BIS and 11–12 years in CHCP. Simple linear regression was used to describe the association between SEP and metabolomic measures. The effects of metabolomic measures on each preclinical cardiovascular phenotype were estimated using regression models to adjust for age, sex, and body mass index (BMI).

Results. At 4 years there was limited evidence of an association between lower SEP and an adverse metabolomic profile (e.g. lower amino acids, lower DHA and Omega-3 fatty acids), and a higher level of inflammation, which was more evident at 11-12 years. A higher level of inflammation was associated with higher MAP at both ages, and there were additional associations between specific metabolites (e.g. higher citrate and creatinine) and increased CIMT at 11-12 years. No associations were observed between inflammation, metabolomic profile and PWV at these ages.

Discussion. Socioeconomic disadvantage was associated with inflammation and metabolomic profiles, and inflammation was associated with adverse cardiovascular phenotypes in childhood. Understanding the causal drivers of these relationships may inform intervention opportunities in childhood to reduce CVD risk.

An estimate of the effect of protein LRP8 rs5174 allele on cognition across time in adults with Alzheimer’s Disease

Pawel Kalinowski, The Florey Institute of Neuroscience and Mental Health.

Background. We investigate the role of the rs5174 minor allele in the progression of cognitive decline in a sample of participants without dementia who are pathology positive for Alzheimer’s disease. We hypothesize that this allele increases protection from programmed cell death, causing a reduction in cognitive decline over time. We aim to examine whether such a reduction in cognitive decline exists and, if the effect is present, to determine the mode of inheritance (MOI).

Methods. Using data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), we estimate the effect of LRP8 rs5174 on cognition across time, as measured by the score on the CDR sum of boxes (CDRSB), using linear mixed modelling and bootstrapping. We investigate which genotype and MOI better captures a reduction in cognitive decline. We replicate the effect in the Australian Imaging, Biomarker & Lifestyle (AIBL) and Swedish BioFINDER cohorts. Finally, we perform a meta-analysis to estimate the effect of the rs5174 minor allele across the three studies.

Results. The results provide evidence of a protective effect of the rs5174 allele on the decline of cognition. Our results also support that both an additive MOI (β = -0.092, 95% CI = [-0.146, -0.035], p = .001) and a recessive MOI (β = -0.246, 95% CI = [-0.348, -0.146], p < .001) are present and protective against cognitive decline. These findings were replicated in both the BioFINDER and AIBL cohorts. A meta-analysis of these results suggests a protective effect under the recessive MOI of β = -0.227, 95% CI [-0.307, -0.146].

Discussion. There is evidence of a protective effect of rs5174 on the decline of cognition. Evidence is stronger for the recessive MOI and the effect size is substantial when considered over time (a reduction of around .25 of a point on CDRSB per year).

An exclusivity index based on area under curve

Michael Walker, The University of New South Wales.

Asthma is a highly prevalent and often serious condition causing significant illness and sometimes death. It typically consumes between 1% and 3% of the medical budget in most countries and imposes a disease burden on society comparable to that of schizophrenia or cirrhosis of the liver. Its causes are as yet unknown, but a significant number of risk factors have been identified.

While classifiers have long been used for prognosis and diagnosis, we use them to identify useful asthma subtypes, called endotypes. Different endotypes often require different treatments and management programs, and are driven by different biological factors. These different factors provide different predictors, and a predictor which separates one endotype from the healthy may not do so for a different endotype. We use this to construct an indicator of when a given predictor is exclusively predictive of a given endotype, based on the mathematical properties of the area under the curve. Our so-called ‘exclusivity index’ is quantitatively precise, unlike a significance threshold.

We use data from the Cohort Asthma Study, a world-class data resource in which asthma-related data were collected from children with family histories of asthma or allergic atopy on such diverse factors as exposure to cigarette smoke, the numbers and types of early respiratory infections, blood antibody titres, mode of birth and number of siblings.

This work was first presented by the author in a thesis submitted to the University of Melbourne for the degree of Master of Philosophy.

Abstracts 6

Chair: Shih Ching Fu

Understanding the Statistical Needs of High Performance Sport

John Warmenhoven, The University of Canberra.

Sport has historically had an unusual relationship with statistics, with difficulty sometimes arising from unusual modifications to conventional statistical approaches made to align with the constraints of sport. To better understand why this happens, the AIS Statistics in High Performance Sport project was formed in 2019 to improve the inter-disciplinary connections between statistics and the sport disciplines, and to identify the environmental factors and constraints that shape how data are analyzed in sports practice and research. A number of focus areas for sport to connect with the statistics community were identified, including the need to better analyze individual athletes, to understand how to define and assess “meaningful change,” and to better leverage exploratory data methods and the role they play in effective data communication within practical sport environments. The project also explored potential practical mechanisms for bridging the current knowledge gap between sport and statistics, such as “communities of practice.” This narrative presentation will also highlight a current follow-on project from this survey, focused on the development and construction of a digital community of practice, a “Sports Data Kiosk,” for sport practitioners to connect with the statistics community and undertake capability uplift aligned with the needs of analyzing sport data, drawing on the themes of the survey.

How does exercise, with and without diet, improve physical function in adults with knee osteoarthritis? A secondary analysis of a randomised controlled trial exploring potential mediators of effects.

Fiona McManus, The University of Melbourne.

Background. To explore mediators of the effects of two 6-month telehealth-delivered exercise programs, with and without a weight loss diet, on improvements in function in knee osteoarthritis.

Methods. Secondary analysis of 345 participants from a three-arm randomised controlled trial of exercise (Exercise) and exercise plus diet (Diet+Exercise) versus information (Control). The outcome was change in function (WOMAC, 0-68) at 12 months. Potential mediators were change at 6 months in i) attitudes towards self-management, ii) fear of movement, iii) arthritis self-efficacy, iv) weight, and v) physical activity, and, at 6 months, vi) willingness for knee surgery. For Diet+Exercise vs Exercise, only change in weight was evaluated. Full causal mediation analyses were conducted in which regression models for the mediator and the outcome were fitted simultaneously, considering each potential mediator and each relevant treatment group comparison separately. Indirect (mediated) effects were estimated using the ‘paramed’ command in Stata.
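
A rough sketch of the underlying idea (not the paramed implementation: with linear models, a continuous mediator and outcome, and no exposure-mediator interaction, the natural indirect effect reduces to the familiar product of coefficients; the data and names below are simulated stand-ins):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: a binary treatment, a continuous mediator (e.g. change in
# self-efficacy at 6 months) and a continuous outcome (change in WOMAC function)
rng = np.random.default_rng(0)
n = 345
treat = rng.integers(0, 2, n)
mediator = 0.5 * treat + rng.normal(size=n)
outcome = -2.0 * mediator - 1.0 * treat + rng.normal(scale=3, size=n)
df = pd.DataFrame({"treat": treat, "mediator": mediator, "outcome": outcome})

# Model 1: mediator on treatment; Model 2: outcome on treatment and mediator
med_fit = smf.ols("mediator ~ treat", df).fit()
out_fit = smf.ols("outcome ~ treat + mediator", df).fit()

# Product-of-coefficients indirect effect and the corresponding direct effect
nie = med_fit.params["treat"] * out_fit.params["mediator"]
nde = out_fit.params["treat"]
print(f"indirect (mediated) effect: {nie:.2f}; direct effect: {nde:.2f}")
# Confidence intervals would typically come from bootstrapping both models together.
```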

Results. Mediators of Exercise vs Control were reduced fear of movement (accounting for -1.11 [95% confidence interval: -2.15, -0.07] units improvement in function) and increased arthritis self-efficacy (-1.66 [-3.04, -0.28] improvement in function). Mediators of Diet+Exercise vs Control included reduced fear of movement (-1.13 [-2.17, -0.08] improvement in function), increased arthritis self-efficacy (-5.15 [-7.34, -2.96] improvement in function), and weight loss (-5.79 [-7.96, -3.63] improvement in function). Weight loss mediated the effect of Diet+Exercise vs Exercise on function (-4.02 [-5.77, -2.26] improvement in function).

Discussion. Increased arthritis self-efficacy, reduced fear of movement, and weight loss partially mediated effects of telehealth-delivered exercise programs, with and without diet, on function in knee osteoarthritis. Weight loss partially mediated the effect of diet plus exercise on function, compared to exercise alone.

Parasite clearance estimation for knowlesi malaria

Jeyamalar T Thurai Rathnam, The University of Melbourne.

Background. In recent years, Southeast Asia has seen a rise in zoonotic Plasmodium knowlesi infections in humans, leading to clinical studies to determine and monitor the efficacy of antimalarial drugs. One of the key outcome measures for antimalarial drug efficacy is parasite clearance. This study aims to compare the Worldwide Antimalarial Resistance Network (WWARN) Parasite Clearance Estimator (PCE) standard two-stage approach, commonly used for P. falciparum, and the Bayesian hierarchical modelling approach to estimate parasite clearance rates.

Methodology. Longitudinal parasite clearance data from 714 patients infected with P. knowlesi and enrolled in three clinical trials conducted in Sabah, Malaysia, were analysed. The parasite clearance rates were estimated using the first stage of the standard two-stage approach, which analyses each patient profile individually, and the Bayesian hierarchical framework, which incorporates all profiles simultaneously. Both methods use a model that incorporates a lag, slope, and tail phase for the parasite clearance profile.

Results. The standard two-stage approach estimated the clearance rates for 678 (95%) patients. The Bayesian method, which included profiles from all 714 patients, gave a faster population mean parasite clearance estimate (0.36/hr versus 0.26/hr), and visually better model fits were observed. The artemisinin-based combination therapies were more effective in treating P. knowlesi than chloroquine, as shown by estimated parasite clearance half-lives of 2.5 and 3.6 hours respectively using the standard two-stage method, and 1.8 and 2.9 hours respectively using the Bayesian method.

Conclusion. It is recommended that the standard two-stage approach be used for data with frequent parasite measurements, as it is straightforward to implement. If the data have fewer parasite measurements available per patient, then the Bayesian hierarchical method should be implemented to avoid potential selection bias.

Investigating the influence of haemoglobin in simulated severe malaria populations to explore optimal dosing schemes

Phoebe Fitzpatrick, The University of Melbourne.

Background. For severe malaria, the World Health Organization (WHO) has recommended artesunate as the first-line treatment, administered as 3 mg/kg for children weighing less than 20 kg and 2.4 mg/kg otherwise. However, in 2021 the US Food and Drug Administration (FDA) challenged this, instead endorsing 2.4 mg/kg regardless of weight. We believe the FDA’s recommendation could lead to underdosing of young children, as their method does not consider the age dependency of haemoglobin levels in areas of high malaria transmission, nor does it use a dataset representative of the target population to model certain covariate relationships.

Methods. We simulated artesunate exposures at different body weights in a virtual population of children with severe malaria under different dosing schemes. The artesunate exposures were derived using a pharmacokinetic model developed by Zaloumis et al. from 223 patients receiving artesunate intravenously, which found that the covariates weight, haemoglobin and body temperature all significantly impacted artesunate pharmacokinetics. The virtual population's weight and haemoglobin were simulated using the LMS method, based on data from the Severe Malaria in Africa Children Network, and body temperature was simulated from a normal distribution with mean and standard deviation based on the same children.

Results. Simulated exposures were lower in children weighing 8 kg to 18 kg than in older children and very young children (<7 kg). Since most children who acquire severe malaria weigh between approximately 6 kg and 20 kg, most children dosed with the standard 2.4 mg/kg of artesunate do not reach the same drug exposures as older children. The adjusted weight-based dose regimen recommended by the WHO lifted the exposures of children weighing less than 20 kg, who make up most children affected by severe malaria, to be comparable to or greater than those of children over 20 kg.

Conclusion. The FDA should reconsider their recommendation in favour of WHO’s to avoid underdosing of young children.

Abstracts 7

Chair: Melissa Middleton

Ising Similarity Regression Models

Zhi Yang Tho, The Australian National University.

Understanding the dependence structure between response variables is an important component of the analysis of multivariate data. This article focuses on modeling the dependence in multivariate binary data, motivated by a real application aiming to understand how the dependence between different U.S. senators’ votes is determined by their similarities in attributes, e.g., political parties and social network profiles. To address such a research question, we propose a novel Ising similarity regression model which regresses the pairwise interaction coefficients in the Ising model against a set of similarity measures available or constructed from covariates. The proposed model provides a clear and explicit quantification of how different similarity measures affect the dependence between binary responses, which can be useful in various fields of interest, such as ecology and finance. Model selection approaches are further developed by regularizing the pseudo-likelihood function with an adaptive lasso penalty, in order to enable the selection of relevant similarity measures. We subsequently establish the estimation and selection consistency of the proposed regularized estimator under a general setting, where the number of similarity measures is allowed to grow with the sample size and the dimension of the response vector. Simulation studies demonstrate the strong performance of parameter estimation and similarity selection. By applying the proposed Ising similarity regression model to a dataset of roll call voting records of 100 U.S. senators from the 117th Congress, we obtain a series of new insights, including but not limited to how the similarities in senators’ parties and social network profiles drive their voting associations.
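
In schematic form (one common parameterisation; notation here is mine, simplified from the paper), the Ising model places a distribution over a binary response vector \(\mathbf{y} \in \{-1, 1\}^{q}\),
\[
P(\mathbf{y}) \propto \exp\Big( \sum_{j} \theta_j y_j + \sum_{j < k} \omega_{jk}\, y_j y_k \Big),
\]
and the similarity regression replaces each pairwise interaction coefficient by \(\omega_{jk} = \sum_{l} \beta_l\, s_{jk,l}\), so that each \(\beta_l\) quantifies how similarity measure \(l\) (e.g. same party, social network similarity) contributes to the dependence between a pair of responses; the adaptive lasso penalty is applied to the \(\beta_l\) to select the relevant measures.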

Increasing cluster size asymptotics for nested error regression models

Ziyang Lyu, The University of New South Wales.

We derive asymptotic results for the maximum likelihood and restricted maximum likelihood (REML) estimators of the parameters in the nested error regression model when both the number of independent clusters and the cluster sizes (the number of observations in each cluster) go to infinity. A set of conditions is given under which the estimators are shown to be asymptotically normal. There are no restrictions on the rate at which the cluster size tends to infinity. We also show that the asymptotic distributions of the estimated best linear unbiased predictors (EBLUPs) of the random effects, computed with the ML/REML estimated variance components, converge to the true distributions of the corresponding random effects when both the number of independent clusters and the cluster sizes go to infinity.
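
For reference, the nested error regression model referred to here can be written in generic notation as
\[
y_{ij} = \mathbf{x}_{ij}^{\top} \boldsymbol{\beta} + u_i + e_{ij}, \qquad u_i \sim N(0, \sigma_u^2), \quad e_{ij} \sim N(0, \sigma_e^2),
\]
for observation \(j = 1, \dots, n_i\) in cluster \(i = 1, \dots, m\); the asymptotic results concern the regime in which both the number of clusters \(m\) and the cluster sizes \(n_i\) tend to infinity.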

On estimation of parameters of two dimensional chirp model with the product term in phase

Abhinek Shukla, Indian Institute of Technology, Kanpur.

We address the problem of parameter estimation for a real-valued multi-component two-dimensional (2D) chirp signal model contaminated with linear stationary errors. This model can be used to describe signals having constant amplitude, with frequency that is a linear function of the spatial coordinates. The product term in the phase of such chirp models is an important characteristic of numerous interferometric measurement signals and radar signal returns. The parameter estimation problem for chirp models is encountered in many real-life applications, such as 2D homomorphic signal processing, magnetic resonance imaging (MRI), optical imaging, interferometric synthetic aperture radar (InSAR), and modeling non-homogeneous patterns in texture images captured by a camera due to perspective or orientation. 2D chirps have also been used as a spreading function in digital watermarking, which is helpful in data security, medical safety, fingerprinting, and detecting content manipulation.

In recent times, several methods have been proposed for parameter estimation in these models. These methods, however, are either statistically sub-optimal, suffering from a high signal-to-noise ratio (SNR) threshold, or computationally burdensome (e.g., least squares estimators (LSEs)).

We will discuss the state of the art and then the proposed computationally efficient method for estimation. The proposed algorithm is motivated by decomposing the 2D chirp model into a number of 1D chirp models. The key attributes of the proposed method are that it is computationally faster than conventional optimal methods such as LSEs, while at the same time having desirable statistical properties, such as attaining the same rates of convergence as the optimal LSEs. In fact, we will discuss some theoretical results, for example that the proposed estimators of the chirp rate parameters have the same asymptotic variance as the traditional LSEs, as well as simulation results comparing the performance of different estimators.

Joint Model for Longitudinal and Multi-state Responses with Functional Predictors

Rianti Utami, The University of New South Wales.

Joint modeling is increasingly used to analyse the association between longitudinal and time-to-event data. This approach can be applied when the event process for a subject may be affected by a covariate measured longitudinally on the same subject. This leads to a reverse dependency, because the existence of the covariate depends on the subject’s survival. In the joint model, both the longitudinal and the time-to-event data are treated as responses that are modelled in separate submodels and then linked together through an association parameter. A recent development in joint models incorporates multi-state processes, which allow for the analysis of multiple events. Another recent development takes into account the influence of functional data in both submodels. Here we combine these two developments to create a functional joint multi-state model. We demonstrate the empirical performance of our approach through a simulation study. Using this model, it becomes possible to analyse a complex dataset with longitudinal, multi-state, and functional components altogether.

Day 3

Abstracts 8

Chair: Alex Fowler

Switching between space and time: Spatio-temporal analysis with cubble

Sherry Zhang, Monash University.

Spatio-temporal data refer to measurements taken across space and time; a common example is climate variables (precipitation, maximum/minimum temperature) measured by meteorological sensors. At one time, we might select a spatial location and inspect the temporal trend; at another, we might select one or more time values and explore the spatial distribution. Ideally, we could make multiple maps and multiple time series to explore these together; however, doing all of these actions is complicated when data arrive fragmented across multiple objects. In this talk, I will demonstrate a new data structure, cubble, for organising spatio-temporal data so that different types of information can be easily accessed for exploratory data analysis and visualisation.

Bayesian Geoadditive modelling and predictors of unsuppressed HIV viral load among HIV-positive men and women in a hyperendemic area of KwaZulu-Natal, South Africa

Adenike Soogun, University of KwaZulu-Natal, Centre for the AIDS Programme of Research in South Africa (CAPRISA).

Unsuppressed HIV viral load is an important marker of sustained HIV transmission. We investigated the prevalence, predictors, and high-risk areas of unsuppressed HIV viral load among HIV-positive men and women. Unsuppressed HIV viral load was defined as a viral load of ≥400 copies/mL. Data from the HIV Incidence District Surveillance System (HIPSS), a longitudinal study undertaken between June 2014 and June 2016 among men and women aged 15–49 years in rural and peri-urban KwaZulu-Natal, South Africa, were analyzed. A Bayesian geoadditive regression model, which includes a spatial effect for small enumeration areas, was fitted using integrated nested Laplace approximation (INLA), accounting for unobserved factors, non-linear effects of selected continuous variables, and spatial autocorrelation. The prevalence of unsuppressed HIV viral load was 46.1% [95% CI: 44.3–47.8]. Predictors of unsuppressed HIV viral load were incomplete high school education, being away from home for more than a month, alcohol consumption, no prior knowledge of HIV status, never having tested for HIV, not being on antiretroviral therapy (ART), being on tuberculosis (TB) medication, having two or more sexual partners in the last 12 months, and having a CD4 cell count of <350 cells/μL. Positive non-linear effects of age, household size, and the number of lifetime HIV tests were identified. The higher-risk pattern of unsuppressed HIV viral load occurred in the northwest and northeast of the study area. Identifying predictors of unsuppressed viral load in a localized geographic area, together with information from spatial risk maps, is important for targeted prevention and treatment programs to reduce the transmission of HIV.

Building a Prediction Model for a Health Utility Score via a Visual Function Instrument in an Australian Population with Inherited Retinal Disease

Ziyi Qi, Centre for Eye Research Australia.

Background. As potential gene therapy interventions for inherited retinal diseases (IRDs) are undergoing investigation, it is essential to use appropriate research methods to maximize research output with available resources. Both vision-specific patient-reported outcomes and general health utility scores are of interest when assessing the efficacy of new therapies. However, the implementation of these tools is time-consuming. This study aims to evaluate whether questions from the 25-item National Eye Institute Visual Function Questionnaire (NEI-VFQ-25) can predict health utility EQ-5D-5L score in Australian adults with IRDs.

Method. A cross-sectional, self-reported Internet-based survey was administered in 2021. NEI-VFQ-25 driving-related items were excluded from the process due to missingness among people who did not drive for reasons unrelated to vision. Multiple variable selection methods were compared to select the NEI-VFQ-25 items that were most predictive of the EQ-5D-5L score in a complete-case analysis: the Least Absolute Shrinkage and Selection Operator (LASSO), backward elimination, forward selection, and stepwise selection for a linear regression model. Participants were split into training and validation sets. Goodness-of-fit statistics, including the Akaike information criterion, the Bayesian information criterion and the mean squared error, were assessed on the validation set and compared to select the optimal model.
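
A schematic version of the LASSO step (illustrative only: the study used the adaptive LASSO and compared it with stepwise methods, and the items, scores and numbers below are simulated, not the survey data):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Simulated stand-in: 20 candidate questionnaire items predicting a utility score
rng = np.random.default_rng(0)
n, p = 615, 20
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"item_{j+1}" for j in range(p)])
true_beta = np.zeros(p)
true_beta[:8] = rng.uniform(0.05, 0.2, 8)          # 8 informative items
y = 0.7 + X.values @ true_beta + rng.normal(scale=0.1, size=n)

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3, random_state=1)

# LASSO with the penalty chosen by cross-validation on the training set
lasso = LassoCV(cv=5, random_state=1).fit(X_train, y_train)
selected = X.columns[lasso.coef_ != 0]
r2_valid = lasso.score(X_valid, y_valid)
print(f"selected items: {list(selected)}")
print(f"validation R^2: {r2_valid:.3f}")
```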

Results. Of 1118 responses, 615 met the eligibility criteria with complete data. The adaptive LASSO had the best performance in variable selection. Eight variables relating to general health, general vision, mental health, ocular pain, near vision, peripheral vision, social function, role limitation, and dependency were selected in the model, which showed the best goodness of fit, with around 49.1% of the variance of the EQ-5D-5L scores explained.

Discussion. Although NEI-VFQ-25 items explain a moderate degree of variation in EQ-5D-5L scores, more accurate prediction is needed to produce meaningful outcomes in economic evaluations. Future work is necessary to assess external validity and methods for dealing with missing item values.

Degradation modelling of mining and mineral processing equipment: a comparison of two statistical approaches

Gabriel Gonzalez, Curtin University.

Introduction. In maintenance management, field measurements are a critical source of information. They are used to diagnose asset health, plan maintenance schedules, and inform capital budgeting. Field measurements can, however, be noisy, and when they are used to monitor a wear process they can, in the extreme, mask the underlying wear process. The effects of measurement variability can lead to misleading results, with economic and safety impacts.

Aim. In this work, we investigate two statistical methods, the general path model and stochastic process models, for modelling and predicting the degradation of industrial piping. Both methods can quantify the effects of measurement variability, and considerable previous research has focused on extending these techniques into modelling complex problems. However, little attention has been paid to comparing both approaches using a realistic dataset.

Methods and Findings. This work summarises both methods’ main theoretical differences and similarities. We illustrate a statistical procedure to implement both approaches under the Bayesian statistical framework. Furthermore, we present and discuss the initial results of comparing the general path and stochastic process models to a realistic and complex industrial data set drawn from the mineral processing industry. We provide guidelines for selecting the most appropriate method and suggestions for further research and development of this class of models for mining and mineral processing problems.

Contributions. The results of this research are valuable to reliability and maintenance engineers in improving current practices for quantitative risk assessment of assets under inspection programs.

Abstracts 9

Chair: Tinula Kariyawasam

A Double Fixed Rank Kriging Approach to Spatial Regression Models with Covariate Measurement Error

Xu Ning, The Australian National University.

In many applications of spatial regression modeling, the spatially-indexed covariates are measured with error, and it is known that ignoring this measurement error can lead to attenuation of the estimated regression coefficients. Classical measurement error techniques may not be appropriate in the spatial setting, due to the lack of validation data and the presence of (residual) spatial correlation among the responses. In this article, we propose a double fixed rank kriging (FRK) approach to obtain bias-corrected estimates of and inference on coefficients in spatial regression models, where the covariates are spatially indexed and subject to measurement error. Assuming they vary smoothly in space, the proposed method first fits an FRK model regressing the covariates against spatial basis functions to obtain predictions of the error-free covariates. These are then passed into a second FRK model, where the response is regressed against the predicted covariates plus another set of spatial basis functions to account for spatial correlation. A simulation study and an application to presence-absence records of Carolina wren from the North American Breeding Bird Survey demonstrate that the proposed double FRK approach can be effective in adjusting for measurement error in spatially correlated data.

Optimal Staffing of Classrooms with Demand Forecasting

Michael Nefiodovas, The University of Western Australia.

We consider the problem of staffing university classrooms with uncertain, time-dependent demand levels. We first formulate the problem as a multi-objective minimisation of both average student wait times and operational costs; this contrasts with existing approaches, which seek to minimise costs subject to a maximum wait-time constraint. Because the classroom setting operates over a small time window, we are unable to apply asymptotic queueing results. We therefore develop a simulation-based model of a classroom to numerically estimate the wait times at a given demand and staffing level. We then present the optimal staffing problem as a sequential decision problem in which an operator must decide the staffing level for each upcoming week. Although there is demand uncertainty each week, operators are still able to make informed forecasts based on previous weeks’ demand and a schedule of high-demand events. We then develop an algorithm based on Bellman’s principle of optimality to solve for the optimal staffing level each week. Finally, we provide a comparison between an optimised and an unoptimised system, showing large financial and service quality improvements.
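
A toy version of the Bellman recursion involved might look as follows (a sketch only: the wait-time penalty table stands in for the simulation-based estimates, the demand regimes follow an invented two-state Markov chain, and all numbers are illustrative):

```python
import numpy as np

# Each week the operator observes the current demand regime (informed by previous
# weeks' demand and the event schedule) and chooses a staffing level.
weeks = 12
demand_states = [0, 1]                       # 0 = normal week, 1 = high-demand week
staff_levels = [1, 2, 3, 4]
wage = 30.0                                  # cost per staff member per week
# expected wait-time penalty[s][d]: falls with staff, rises with demand
penalty = {1: [40.0, 160.0], 2: [15.0, 70.0], 3: [5.0, 30.0], 4: [2.0, 10.0]}
# Markov transition matrix for the demand regime
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# Backward induction on the value function V[t, d]
V = np.zeros((weeks + 1, len(demand_states)))
policy = np.zeros((weeks, len(demand_states)), dtype=int)
for t in reversed(range(weeks)):
    for d in demand_states:
        costs = [wage * s + penalty[s][d] + P[d] @ V[t + 1] for s in staff_levels]
        policy[t, d] = staff_levels[int(np.argmin(costs))]
        V[t, d] = min(costs)

print("optimal staffing by week (rows: normal vs high demand):")
print(policy.T)
```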

Abstracts 10

Chair: Berwin Turlach

Assessing the influence of the MAUP on Spatial models for areal disease data in Queensland, Australia

Farzana Jahan, Murdoch University.

Background. Spatially aggregated data at small area levels are popular for disease mapping, to help quantify spatial variation and to identify areas with higher or lower risk. Spatial aggregation can protect the confidentiality of the data, aid in the calculation of relevant rates and risks, and enable adjustment for spatial autocorrelation. But these data are also susceptible to the modifiable areal unit problem (MAUP), which causes variations in statistical inference depending on the size of the areal units used (the scale effect) and the way the study area is divided (the zoning effect, e.g., municipalities vs. districts). The effect of the MAUP on different data types and different geographies is being studied. However, there is a dearth of literature using cancer incidence data to examine how the MAUP impacts inference on the risk of cancer when data are aggregated at different levels. The key objective of this study is to explore the impact of the MAUP in the analysis of cancer incidence data. Since the geography of Australia is unique, we believe an explicit study of spatial modelling using different official boundary levels to define small areas is important for understanding how the MAUP impacts disease mapping in Australia.

Methods. The well-known Besag-York-Mollié (BYM) model is fitted to lung cancer count data at different boundary levels using a suitable covariate.
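
In its usual form, the BYM model for the observed count \(y_i\) in area \(i\) is
\[
y_i \sim \text{Poisson}(E_i \rho_i), \qquad \log(\rho_i) = \beta_0 + \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i + v_i,
\]
where \(E_i\) is the expected count, \(u_i\) is a spatially structured (intrinsic CAR) random effect and \(v_i\) is an unstructured random effect; refitting this model at different areal aggregations is what exposes the scale and zoning effects of the MAUP.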

Results. The impact of the MAUP is assessed through the underlying spatial autocorrelation of the observed data and the modelled residuals, as well as through the covariate effects.

Discussion. This research shows how inference differs when different areal boundaries are used. The study contributes to understanding how the MAUP should be considered when analysing small area level cancer data in Australia, and which areal units should be chosen if many options are available.

Predicting Marimba Stickings Using Long Short-Term Memory Neural Networks

Jet Kye Chong, The University of Western Australia.

In marimba music, ‘stickings’ are the choices of mallets used to strike each note. Stickings significantly influence both the physical facility and the expressive quality of a performance. Choosing ‘good’ stickings and evaluating one’s stickings are complex tasks, often relying vaguely on trial and error. Machine learning (ML) approaches, particularly with advances in sequence-to-sequence techniques, have proved well suited to similar complex classification problems, motivating their application in our study. We address the sticking problem by developing Long Short-Term Memory (LSTM) models, trained on exercises from Leigh Howard Stevens’ Method of Movement for Marimba, to generate stickings for 4-mallet marimba music. Model performance was measured under a range of metrics to account for multiple sticking possibilities, with LSTM models achieving a maximum average micro-accuracy of 97.3%. Finally, we discuss qualitative observations about the sticking predictions and limitations of this study, and provide direction for further development in this field.

Objectives. Restriction to the analysis of births that survive past a specified gestational age (typically 20 weeks gestation) can lead to biased exposure-outcome associations as the exposure may impact selection into the study and mask the true observation of outcomes. The objective of this simulation study was to estimate the influence of bias resulting from using a left truncated dataset to ascertain exposure-outcome associations in perinatal studies.

Approach. We simulated the magnitude of bias under a collider-stratification mechanism for the association between the exposure of advancing maternal age (≥ 35 years) and the outcome of stillbirth. This bias occurs when the cause of restriction (early pregnancy loss) is influenced by both the exposure and unmeasured factors that also affect the outcome. Simulation parameters were based on an original birth cohort from Western Australia and a range of plausible values for the prevalence of early pregnancy loss (< 20 gestational weeks), an unmeasured factor U and the odds ratios for the selection effects. Selection effects included the effects of maternal age on early pregnancy loss, U on early pregnancy loss, and U on stillbirth. We compared the simulation scenarios to the observed birth cohort that was truncated to pregnancies that survived beyond 20 gestational weeks.

Findings. We found evidence of a marginal downward bias, which was most prominent for women aged 40+ years. Our findings indicate that the stronger the effect of the unmeasured U on early pregnancy loss and stillbirth, the greater the influence of the bias. Given the large prevalence and large magnitudes of selection effects required, it is unlikely that such unmeasured confounders exist to induce such strong bias. We therefore conclude that bias due to left truncation is not sufficient to have a substantial effect on the association between maternal age and stillbirth.

Conveyor belt wear forecasting through a Bayesian Hierarchical Modeling framework using functional data analysis and gamma processes.

Ryan Leadbetter, Curtin University.

Reliability engineers who work in the mining and mineral processing industry must make decisions on how and when to maintain overland conveyor belts. These decisions can severely impact the production of the mine or plant if made incorrectly. Engineers use sets of ultrasonic thickness measurements taken across the width of the belt at multiple time points to estimate the wear rate and remaining useful life of the belt, which in turn inform their maintenance decisions. However, current approaches to forecasting belt wear are naive because they throw away valuable information about the spatial wear characteristics of the conveyor and do not properly account for the many sources of uncertainty.

We propose a new method for forecasting belt wear in a Bayesian framework using a set of conditional models - a data model, a process model, and a parameter model. For the data model, we adopt a functional data interpretation of the ultrasonic thickness measurements. In the process model, we describe the evolution of the smooth underlying wear profile through time using a set of gamma processes (a type of stochastic jump process). Our method accounts for the many sources of uncertainty in a principled way and incorporates the spatial characteristics of the belt wear profile to produce an easily interpretable forecast on which engineers can base maintenance decisions.
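
To illustrate the process-model component only (a schematic, not the full model, which couples functional thickness profiles across the belt width with the gamma processes in a Bayesian hierarchy), a gamma process accumulates independent gamma-distributed wear increments, which keeps simulated wear monotone:

```python
import numpy as np

# Simulate wear paths from a stationary gamma process: increments over disjoint
# intervals are independent Gamma(shape * dt, scale) draws, so wear never decreases.
rng = np.random.default_rng(0)
shape_per_week, scale_mm = 0.8, 0.05          # illustrative values, not fitted ones
weeks = np.arange(0, 201)
n_paths = 5

increments = rng.gamma(shape_per_week, scale_mm, size=(n_paths, len(weeks) - 1))
wear = np.hstack([np.zeros((n_paths, 1)), increments.cumsum(axis=1)])   # cumulative wear (mm)

# Forecast the remaining useful life as the first crossing of a wear limit
limit_mm = 6.0
crossing_weeks = [weeks[np.argmax(path >= limit_mm)] if path.max() >= limit_mm else None
                  for path in wear]
print("simulated weeks to reach the wear limit:", crossing_weeks)
```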

Abstracts 11

Chair: Sophie Giraudo

Estimating Life Expectancy for Aboriginal and Torres Strait Islander peoples

Mark Ioppolo, Australian Bureau of Statistics.

Life expectancy is an estimate of the average time a person is expected to live based on their age, sex, geographic location, ethnicity, and other demographic factors. In 2009 the Australian Bureau of Statistics adopted a new method for estimating life expectancy for Aboriginal and Torres Strait Islander peoples which accounts for under-identification of deaths of First Nations Peoples by linking death records to Census data. I will provide an overview of this methodology, discuss challenges, and consider emerging methods which may offer enhancements to the current method.

A Discussion on Gender Imbalances in Mathematics and Statistics

Andrea Carvalho, Australian Bureau of Statistics.

In Australia, women make up only 28% of workers in science, technology, engineering and mathematics (STEM). Many attribute the significant gender gap to women not being well equipped for the job due to social, practical and biological disadvantages.

This talk will discuss the gender imbalance prevalent in mathematics and statistics and consider its impact. The discussion will be supported by research and statistics, as well as my own personal experiences as a woman statistician. It delves into the importance of progressive measures adopted during the hiring process, the wage gap, unconscious bias and the impact of a lack of representation.