Using `WeightIt` R package for causal inference analyses

I recently discovered the WeightIt R package and was very happy with its functionality and performance. I “delegated” my IPTW computation to WeightIt: it was faster and, as expected, produced the same results.
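As a minimal sketch of what that delegation looks like (the data and variable names here are made up for illustration, not from the post), WeightIt estimates the propensity scores and derives the weights in a single call:

```r
# Toy IPTW example with WeightIt (hypothetical data, not from the post)
library(WeightIt)

set.seed(1)
n  <- 1000
c1 <- rnorm(n)
c2 <- rnorm(n)
x  <- rbinom(n, 1, plogis(0.3 * c1 - 0.2 * c2))   # exposure depends on confounders
y  <- rbinom(n, 1, plogis(-1 + 0.4 * c1 + 0.1 * c2))  # outcome, null exposure effect
dat <- data.frame(x, y, c1, c2)

# Logistic propensity score model; estimand = "ATE" gives
# the usual inverse probability of treatment weights
w <- weightit(x ~ c1 + c2, data = dat, method = "glm", estimand = "ATE")

# Weighted outcome model
fit <- glm(y ~ x, data = dat, weights = w$weights, family = quasibinomial())
summary(fit)
```

The `w$weights` vector is what you would otherwise compute by hand as `1/ps` for the exposed and `1/(1 - ps)` for the unexposed.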

Antibiotics utilization in Denmark: using R to solve practical tasks in epidemiology

Intro This is a companion blog post for the R-Ladies Abuja meet-up. We talk about R in epidemiology and the practical aspects of being an R user, and walk through a data-supported example of how to use R for data aggregation and visualization.

Income inequality: OECD data

Data In this post I explore income inequality. The data come from the OECD, where income is defined as household disposable income per year. The main income inequality markers I use from the dataset are:

TidyTuesday 2022: week 1

Data I’m going to use data I’m intimately familiar with: medication utilization in Denmark. I will visualize antidepressant use patterns. I’ll use palettes from my {hermitage} package. My favourite palettes so far are madonna_litta and hermitage_1.

Kaggle ML survey 2021

I am a doctoral student. I often wonder what my future holds in this brave and, according to some, largely liberated world. In the EU, although 48% of doctoral graduates were women according to She Figures 2021, only 34% of researchers and only 24% of heads of higher education institutions are women.

TidyTuesday Starbucks Data

Data The data I use are available here. Let’s go ✌ I had no initial idea of what I wanted to present, so I made several exploratory plots to see what I was dealing with.

TidyTuesday Spice Girls Data

Data I use data by Jacquie Tran available here. Let’s go ✌ I chose to plot the audio features of Spice Girls tracks: danceability, energy, speechiness, acousticness, valence, liveness, and instrumentalness.

Iterative visualizations with ggplot2: no more copy-pasting

Are you tired of copy-pasting some chunks of your code over and over again? I am, too. Let’s dig into how we can improve our workflow with a bit of tidy evaluation and writing our own functions to avoid copy-pasting.
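As a taste of the idea, a small sketch of the pattern (the function and data here are illustrative, not from the post): embracing an argument with `{{ }}` lets one function stand in for many near-identical ggplot2 chunks.

```r
# A plotting function using tidy evaluation, so the same ggplot2
# code is not copy-pasted for every variable (illustrative example)
library(ggplot2)
library(rlang)

plot_hist <- function(data, var) {
  ggplot(data, aes(x = {{ var }})) +        # {{ }} forwards the bare column name
    geom_histogram(bins = 30) +
    labs(x = as_label(enquo(var)))          # reuse the column name as the axis label
}

# One call per variable instead of one copy-pasted chunk per variable
p1 <- plot_hist(iris, Sepal.Length)
p2 <- plot_hist(iris, Petal.Width)
```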

Finding your R

A story of how I started using R, struggled, and ultimately found my way and motivation to keep learning and using Rstats

Data simulation and propensity score estimation

In this post, I will play around with simulated data. The things I’ll be doing:

- Simulating my own dataset with null associations between two different exposures (x1 and x2) and outcomes (y1 and y2) for each exposure (4 exposure-outcome pairs)
- Computing propensity scores (PS) for each exposure and trimming the non-overlapping areas of the PS distributions between exposed and unexposed
- Running several logistic regression models: crude, conventionally adjusted, and adjusted with standardized mortality ratio (SMR) weighting using the PS
- Calculating how biased the estimates are compared with the true (null) effect

Data simulation First, I simulate the data: 10 confounders c1-c10, 2 exposures x1 and x2 (with 7% and 20% prevalence, respectively), 2 outcomes (y1 and y2), 2 exposure predictors c11-c12, and 2 predictors of the outcome c13-c14.
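The steps above can be sketched for a single exposure-outcome pair. This is a simplified illustration with only two confounders and made-up coefficients, not the post's actual simulation code:

```r
# Simplified sketch: one exposure-outcome pair, two confounders
set.seed(42)
n  <- 10000
c1 <- rnorm(n)
c2 <- rbinom(n, 1, 0.5)

# Exposure depends on the confounders; intercept chosen for low prevalence
x1 <- rbinom(n, 1, plogis(-2.8 + 0.5 * c1 + 0.4 * c2))

# Outcome depends on the confounders only -> true null exposure effect
y1 <- rbinom(n, 1, plogis(-2 + 0.3 * c1 + 0.6 * c2))

# Propensity scores from a logistic model
ps <- fitted(glm(x1 ~ c1 + c2, family = binomial()))

# Trim to the overlapping region of the PS distributions
lo   <- max(min(ps[x1 == 1]), min(ps[x1 == 0]))
hi   <- min(max(ps[x1 == 1]), max(ps[x1 == 0]))
keep <- ps >= lo & ps <= hi

# SMR weights: 1 for the exposed, ps / (1 - ps) for the unexposed
w <- ifelse(x1 == 1, 1, ps / (1 - ps))

# Crude vs SMR-weighted logistic models on the trimmed data
crude <- glm(y1 ~ x1, family = binomial(), subset = keep)
smr   <- glm(y1 ~ x1, family = quasibinomial(), weights = w, subset = keep)
exp(coef(crude)["x1"])  # crude OR
exp(coef(smr)["x1"])    # SMR-weighted OR
```

Since the outcome model contains no x1 effect, both odds ratios should land near 1; bias would be measured as the deviation of each estimate from that null.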