Robin looks for meta-analysis alternatives 1: JamoviMeta.

Meta-analyses. So much meta, many analyses. I’ve done a few: two are under review, and two almost ready for submission. The common thread in all of these is the Comprehensive Meta-Analysis (CMA) software package. CMA has brought the practice of meta-analysis (or ‘an exercise in mega-silliness’, as Eysenck called it) to a broader audience because of its relative ease of use. The downside of this ease of use is the unbridled proliferation of biased meta-analyses that serve only to ‘prove’ something works, but let’s not get into that – my blood pressure is high enough as it is.

Some years back, CMA changed from one-off purchases to an annual subscription plan, ranging from $195 to $895 per year per user, obviously taking hints from other lucrative subscription-based plans (I’m looking at you, Office365). Moreover, CMA has a number of very irritating bugs and glitches: to name a few, there are issues with copying and pasting data, issues with ‘high-resolution’ graphics exports producing nothing but a black screen, issues with the system locale, etc. etc. On the whole, CMA is a bit cumbersome and expensive to work with, and I’ve been telling myself to go and learn R for years now, if only to use the metafor package, which is widely regarded as excellent.

Would I like some cheese with my whine?

However, I never found the time to take up the learning curve needed for R (i.e., I’m too stupid and lazy), and while I was recently whining on Twitter about how someone (most definitely not me) should make a graphical front-end for R that doesn’t presuppose advanced degrees in computer science, voodoo black arts and advanced nerdery, Wolfgang Viechtbauer pointed me to JamoviMeta.

This might just be what I’ve been questing for: a suitable alternative to CMA that even full-on unapologetic troglodytes like me can understand – so let’s give it a test drive!

DISCLAIMER: Most of the time I have no idea what I’m doing, as will be readily apparent to any expert after even a cursory glance.

INSTALLING AND FIRST GLANCE

I was redirected to a GitHub page, which instructed me to first download Jamovi and then add the module MetaModel.jmo.

I’d never heard of Jamovi before, but let’s give it a shot – the installer seems straightforward. MetaModel is an add-on for the Jamovi software package, which is itself a fairly new initiative at an “open” statistics package. As far as I can tell, Jamovi is built on top of R, but at this point that’s not particularly relevant for what I want to do.

The main screen of Jamovi looks simple, clean and friendly. Now, to ‘sideload’ MetaModel. There’s nothing in the menu, so: click Modules, choose sideload, find the downloaded MetaModel.jmo and import it.

ENTERING DATA

JamoviMeta main window

It’s not immediately apparent where I should start – the boxes with labels like “Group one sample size” look inviting as text boxes, but entering information doesn’t work. Using the horizontal arrow to shift the three bubbles marked “A” from the left panel to the right doesn’t work either, and just flashes the little yellow ruler(?) icon in the text box that isn’t a text box.

Entering variables (note how the dialogue box resembles SPSS).

The grey arrow pointing to the right brings me to a spreadsheet-like… well, spreadsheet. Ah! The A, B, C refer to columns in this spreadsheet, and the software expects data in the obvious format: study name, sample sizes, means and standard deviations. Jamovi seems to automatically recognise the type of data I’ve entered, but also seems thrown off by my use of a comma instead of a period as decimal separator. Incidentally, this is/was a major issue with CMA, which depends on your computer’s ‘locale’ settings – if you’re from a country that uses dots for thousands and commas for decimals (e.g., €10.000,00) and you send a data file to a colleague who has US numbering (e.g., $10,000.00), the data would be all screwed up. Adding variable labels isn’t immediately apparent either, but double-clicking a column header and then double-clicking the letter of the column lets you change the label.
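Nerd aside: the same locale headache at least has a clean fix in plain R. A minimal sketch, assuming a hypothetical file effects.csv saved on a European-locale machine:

```r
# Hypothetical file "effects.csv", with semicolons as separators and
# decimal commas, as European locales tend to produce:
#
#   study;n;mean;sd
#   Kok 2014 A;60;3,14;1,02
#   Kok 2014 B;55;2,80;0,95

# read.csv2() assumes sep = ";" and dec = ",", so the decimal commas
# parse into proper numerics:
dat <- read.csv2("effects.csv")
str(dat$mean)   # num [1:2] 3.14 2.8

# read.csv() assumes sep = "," and dec = "." and would mangle this file
# into one character column - roughly what tripped up Jamovi here.
```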

Variable labels & type window

Having entered the data, I go back to “Analyse” and try to enter my newly made data into MetaModel. Everything works, except… it won’t accept the sample sizes for my data. When I try to, it flashes the yellow ruler (?) in red. Ah – this probably means it wants continuous data, but the sample sizes had been interpreted as ordinal data, as denoted by the three bubbles (same icons as in SPSS).

This being corrected, MetaModel goes straight to work (apparently) and tells me “Need to specify ‘vi’ or ‘sei’ argument”. Well, obviously. More random clicking is in order, I think – that’s never failed me, since psychology students are taught to keep clicking until the window says p<0.05 or smaller*). I’ve only just entered data and haven’t actually told MetaModel what to do, so it’s no surprise that nothing works.

I flip open ‘Model options’, ‘plots’ and ‘publication bias’.

…I quickly close ‘publication bias’ again, as it only shows options for Fail-safe N. Let us never mention Fail-safe N again, and I hope the developer removes this option ASAP. I am aware of the current discussion of how Trim & Fill probably doesn’t work very well either (nor does anything else, apart from 3PSM, apparently), but I think everyone can agree that Fail-safe N should never be used.

Clicking around a bit (I won’t go into all the different types of meta-analysis model estimators), I find out that I have to choose either ‘Raw Mean Difference’ or ‘Log Transformed Ratio of Means’ to make the “Need to specify ‘vi’ or ‘sei’ argument” message go away. In hindsight this makes some sense: until you pick an effect size measure, the software has no way to compute the effect sizes and sampling variances the model needs. All this looks encouraging, and it’s time for real data.
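An educated guess at what’s going on under the hood: JamoviMeta appears to wrap R’s metafor package (that error message is metafor’s), whose model function rma() wants an effect size (yi) plus its sampling variance (vi) or standard error (sei). Picking an effect size measure is what lets the software compute those from the raw means and SDs. A sketch with entirely made-up data:

```r
library(metafor)

# Made-up two-group data (study names, means, SDs, sample sizes):
dat <- data.frame(
  study = c("Kok 2014 A", "Kok 2014 B", "Smith 2015"),
  m1i = c(3.1, 2.8, 3.5), sd1i = c(1.0, 0.9, 1.2), n1i = c(60, 55, 80),
  m2i = c(2.5, 2.6, 2.9), sd2i = c(1.1, 1.0, 1.3), n2i = c(58, 57, 75)
)

# measure = "MD" is the raw mean difference; "ROM" would be the
# log-transformed ratio of means. escalc() adds yi and vi columns:
dat <- escalc(measure = "MD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

rma(yi, vi, data = dat)      # works: vi is now specified
# rma(yi = c(0.6, 0.2, 0.6)) # reproduces "Need to specify 'vi' or 'sei' argument."
```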

I prepared a small data file in CMA, based on a meta-analysis we’re currently working on, using Excel as an intermediary (CMA’s data import/export capabilities are non-existent, and I needed to change all decimal commas to decimal points), and copy-pasted the data into MetaModel. Small issue: there’s no fixed column for subgroups within studies (or maybe I’m just doing it wrong), so I renamed the studies to Kok 2014 A, B, etc.

JamoviMeta data window

CMA data window

THE ANALYSES

However, running the analyses from here on was straightforward, easy and quick. The results are pretty much consistent with CMA (I used a DerSimonian-Laird model estimator, which I believe is the CMA default). I saw no strange differences or outliers, apart from a few (not particularly large) differences in effect sizes. These are probably due to subtle differences in calculations; I take it both CMA and MetaModel have their own set of computational assumptions that explain the small variations. Kendall’s tau was even spot on.
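For the curious: the equivalent analysis in plain metafor would look something like this, continuing with the made-up data from the earlier sketch (“DL” being the DerSimonian-Laird estimator):

```r
# Random-effects model with the DerSimonian-Laird heterogeneity estimator:
res <- rma(yi, vi, data = dat, method = "DL")
summary(res)   # pooled estimate, CI, Q, I^2, etc.

# Kendall's tau rank correlation test for funnel plot asymmetry:
ranktest(res)
```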

MetaModel main results

CMA main results

EXPORTING OUTPUT

MetaModel has tackled one of my biggest gripes with CMA: high-quality images. CMA’s so-called ‘high resolution’ outputs have always been quirky, ugly and too low-resolution for most journals, as it would only export to Word (ugh), PowerPoint (really?) and .WMF (WTF?). In MetaModel, right-clicking e.g. the funnel plot gives you the option

Right-click graphics export options

to export the image to a high-quality PDF which looks crisp and clear (download sample PDFs of the MetaModel funnel plot and MetaModel forest plot here).
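For comparison, this is roughly what that right-click does for you if you’d done the whole thing in R – a sketch, reusing the hypothetical res object from the earlier snippet:

```r
# Vector PDFs: crisp at any size, journal-proof.
pdf("forest.pdf", width = 8, height = 6)
forest(res)
dev.off()

pdf("funnel.pdf", width = 6, height = 6)
funnel(res)
dev.off()
```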

MetaModel forest plot

CMA “high resolution” forest plot

MetaModel funnel plot

CMA funnel plot (with imputed studies)

THE VERDICT:

If this is a ‘beta’, it looks and works better than OpenMetaAnalyst ever did (although to be fair, I should revisit that some time). The developer (Kyle Hamilton) has done an impressive job coding a relatively simple but very usable module for meta-analysis. It is light-years faster than CMA (which can crawl to a virtual standstill on my i3 laptop) and can output high-quality graphics. Also, it does real-time analyses, so there’s no need to keep mashing that “-> Run analyses” button after making small changes. Choosing Jamovi as a front-end was a good bet – its interface looks friendly, modern and crisp. Of course, features are missing and this was just a very quick test run, but my first impression is very good. I’d very much like to see where this is going.

THE GOOD:

  • Pretty much MWAM (Moron Without A Manual) proof.
  • Feels much more modern than CMA. Looks better. MUCH faster.
  • More model estimators than CMA.
  • Contour-enhanced funnel plots and prediction intervals. Nice addition.
  • So far, no glitches or crashes.
  • It’s free!

THE BAD:

  • Hover-over hints (contextual information if you hover over a button) would be nice
  • Error messages aren’t especially helpful

THE UGLY:

  • Fail-safe N.

THE REQUESTS:

  • Modern publication bias methods, e.g. p-curve, p-uniform, PET(-PEESE) or 3PSM.
  • 95% CIs around I²
  • Support for multiple subgroups and timepoints?

 

*) Only a slight exaggeration: this is what students teach themselves.


Good heavens, my h-index is still irrelevant.

My H-index rose from 6 (“HaaaLOSER”) to 7 (“mind-numbingly tedious and uninteresting”). At some point this year maybe it’ll rise to 8 – “Like a fully gorged woodlouse penis.”

Meanwhile, here’s a silly comparison of the parallels between being in a band and being in academia. Good to know that if one failing career fails me, I can always go back to another failing career that failed me.

High-resolution Risk of Bias assessment graph… in Excel!

Some years ago, I found myself ranting and raving at RevMan, the official Cochrane Collaboration software suite for doing systematic reviews. Unfortunately, either because I’m an idiot or because the software is an idiot (possibly both), I found it impossible to export a Risk of Bias assessment graph at a resolution that was even remotely acceptable to journals. These days journals tend to accept only vector-based graphics or bitmap images in HUGE resolutions (presumably so they can scale these down to unreadable smudges embedded in a .pdf). At the time I had a number of meta-analyses on my hands, so I decided to recreate the RevMan-style risk of bias assessment graph in Excel. This way anyone can make crisp-looking risk of bias assessment graphs at a resolution higher than 16 dpi (or whatever pre-1990 graphics resolution RevMan appears to use…).

The sheet is relatively easy to use: just follow the embedded instructions. You need (1) percentages from your own risk of bias assessment and (2) the basic colouring skills I’m sure you picked up before the age of 3. All you basically do to make the risk of bias assessment graph is colour it in using Excel. It does involve a bit of fiddling with column and row heights and widths, but it gives you nice graphs like these:

Sample Risk of Bias assessment graph

Sample Risk of Bias Graph
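If you’d rather stay out of Excel entirely: the same sort of graph can also be cobbled together in a few lines of base R. This is not how the sheet works, just an alternative sketch with made-up percentages:

```r
# Made-up percentages of low/unclear/high risk per bias domain:
rob <- rbind(low = c(60, 45, 80), unclear = c(25, 35, 10), high = c(15, 20, 10))
colnames(rob) <- c("Random sequence generation", "Allocation concealment",
                   "Blinding of outcome assessment")

# Stacked horizontal bars in the usual green/yellow/red, exported as a
# vector PDF so journals can't complain about resolution:
pdf("rob_graph.pdf", width = 8, height = 3)
par(mar = c(4, 16, 1, 1))   # wide left margin for the long domain labels
barplot(rob, horiz = TRUE, las = 1, col = c("green3", "gold", "red2"),
        legend.text = rownames(rob), xlab = "Percentage of studies")
dev.off()
```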

Like anything I ever do, this comes with absolutely no guarantee of any kind, so don’t blame me if this Excel file blows up your computer, kills your pets, unleashes the Zombie Apocalypse or makes Jason Donovan record a new album.


Download available here (licensed under Creative Commons BY-SA):

UPDATE September 2016 – a friendly e-mailer noted that the sheet was protected to disallow column formatting (which made the thing useless). Version 2.4 corrects this.

UPDATE January 2017 – another friendly person noted that I’m an idiot and hadn’t fixed the column formatting problem in the full Cochrane version of the Excel sheet. Will I ever learn? Probably not. Version 2.5 corrects this (and undoubtedly introduces new awful bugs).

Risk of Bias Graph in Excel – v2.5

MD5: BA8F1F1F830742C8E206C86F1BB31089


eMental Health interview with VGCt [Dutch]

Nothing like an interview on eMental Health to make you feel important

I’m still reeling from the festivities surrounding my H-index increase from 3 (“aggressively mediocre”) to 4 (“impressively flaccid but with mounting tumescence”)*. Best gift I got: a sad, weary stare from my colleagues. Yay! But back to eMental Health (booooo hisssss).

A while back I did an interview (in Dutch) with Anja Greeven from the Dutch Association for Cognitive Behavioural Therapy [Vereniging voor Gedragstherapie en Cognitieve Therapie] for their Science Update newsletter of December 2015. It’s about life, the universe and everything, but mostly about eHealth and eMental Health: implementation (or lack thereof), wishful thinking, perverse incentives (you have a filthy mind) and that robot therapist we’ve all been dreaming about (sorry, Alan Turing).

Kudos to me for the wonderful contradiction where I call everyone who predicts the future a liar and a charlatan, after which I blithely shoot myself in the foot by trying to predict the future. In my defense, I never claimed I wasn’t a liar and a charlatan. It was great fun blathering on about all kinds of things, and massive respect to Anja, who had to wade through a two-hour recording of my irritating voice to find things that might pass as making sense to someone, presumably.

Anyway, the interview is in Dutch, so good luck Google Translating it!


Link to the VGCt interview in .pdf [Dutch]

 

*) Real proper technical sciencey descriptions for these numbers, actually. The views expressed in this interview are my own, and nobody I know or work for would ever endorse the silly incoherent drivel I’ve put forward in this interview.

Save the Data! Data integrity in Academia

Data integrity is integral to reproducibility.

I recently read something on an Internet web site called Facebook; it’s supposed to be quite the thing at the moment. Friend and skeptical academic James Coyne, whose fearless stabs at the methodologically pathetic and conceptually weak I much admire, instafacetweetbooked a post over at Mind the Brain, pointing to a case in post-publication peer review that made me wonder whether I was looking at serious academic discourse or toddlers in kindergarten trying to smash each other’s sand castles. James and I have co-authored a manuscript about the shortcomings in psychotherapy research, which is available freely here, and I’m ashamed to say that I still haven’t met up with James in person, although he’s tried to get hold of me more than once when he was in Amsterdam.

Anyway, case in point: during post-publication peer review, after the reviewers highlighted flaws in the original analysis, the original authors manipulated the published data to make the post-publication peer reviewers look like a bunch of idiots who didn’t know what they were doing. This is clearly pathetic and must have been immensely frustrating for the post-publication reviewers (it was a heroic feat in itself to be able to prove such devious manipulations in the first place; thankfully, they took close note of the data set time stamps).

What can be done? Checking time stamps is trivial, but so is manipulating time stamps. My mind immediately turned to what nerdy computery types like me have used for a very, very long time: file checksums. We use these to check whether, for example, the file we just downloaded didn’t get corrupted somewhere along the sewer pipes of the Internet. Best known, probably, are MD5 hashes, a cryptographic hash of the information in a file. MD5 hashes are, for practical purposes, unique: they are composed of 32 hexadecimal characters (0-9, A-F), which yields 16^32 = 2^128 ≈ 3.4 × 10^38 different combinations. That’ll do nicely to catalogue all the Internet’s cat memes with unique hashes from decades past and aeons to come, and then some. So, if I were to download nyancat.png from www.nyancatmemerepository.com, I could calculate the hash of that downloaded file using, e.g., the excellent md5check.exe by Angus Johnson, which gives me a unique 32-character hash that I could then compare with the hash as shown on www.nyancatmemerepository.com. Few things are worse than corrupted cat memes, really, but consider that these hashes are equally useful for checking whether a piece of, say, security software wasn’t tampered with somewhere between the programmer’s keyboard and your hard drive – it’s the computer equivalent of putting a tiny sliver of sellotape on the cookie jar to see that nobody’s nicking your Oreos.
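You don’t even need md5check.exe for this, by the way – base R will happily do it (file name and published hash made up for illustration):

```r
# tools::md5sum() returns the MD5 hash of a file's contents as a
# 32-character hexadecimal string:
hash <- unname(tools::md5sum("nyancat.png"))
hash

# Compare against the hash published on the download page:
published <- "d41d8cd98f00b204e9800998ecf8427e"  # example value
hash == published   # TRUE means the cat meme survived the sewer pipes intact
```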

How can all this help us in science, and in the case stated above? Let’s try to corrupt some data. Let’s look at the SPSS sample data file “anticonvulsants.sav” as included in IBM SPSS 21. It’s a straightforward data set from a multi-centre trial of an anticonvulsant vs. placebo, with patients followed for a number of weeks, reporting the number of convulsions per patient per week as a continuous scale variable. The MD5 hash (“checksum”) for this data file is F5942356205BF75AD7EDFF103BABC6D3, as reported by md5check.exe.

[Screenshot: md5check.exe reporting the original checksum]

First, I duplicate the file (anticonvulsants (2).sav), and md5check.exe tells me that the checksum matches the original [screenshot] – these files are bit-for-bit exactly the same. The more astute observer will wonder why changing the filename didn’t change the checksum (bit-for-bit, right?). The short answer is that the hash is computed over the file’s contents, not its name; Google most assuredly is your friend if you really must know more.

Now, to test the anti-tamper check, let’s say we’re being mildly optimistic about the number of convulsions that our new anticonvulsant can prevent. Let’s look at patient 1FSL from centre 07057. He’s on our swanky new anticonvulsant, and the variable ‘convulsions’ tells us he’s had 2, 6, 4, 4, 6 and 3 convulsions each week, respectively. But I’m sure the nurses didn’t mean to report that. Perhaps they mistook his spasmodic exuberance during spongey-bathtime for a convulsion? Anyway. I’m sure they meant to report 2 fewer convulsions per week, as he gets the sponge twice a week, so I subtract 2 convulsions for each week, leaving us with 0, 4, 2, 2, 4 and 1 convulsions.

Let’s save the file and compare checksums against the original data file.

[Screenshot: md5check.exe reporting the mismatching checksums]

Oh dear. The data done broke. The resulting checksum for the… enhanced dataset is E3A79623A681AD7C9CD7AE6181806E8A, which is completely different from the original checksum, F5942356205BF75AD7EDFF103BABC6D3 (are you convulsing yet?).

Since MD5 hashes are practically unique, changing even a single bit of information in a data file changes the checksum and flags the file as compromised – and a regular number takes up quite a few more bits than one. Be it data corruption or malicious intent: if there’s a mismatch between files, there’s a problem. Is this a good point to remind you that replication is a fundamental underpinning of science? Yes it is.

This was just a simple proof of concept, and I’m sure this has been done before. The wealth of ‘open data’ means that data are open to both honest re-analysis and dishonest re-analysis. To ensure data integrity when graciously uploading raw data with a manuscript, why not include some kind of digital watermark? In this example, I’ve used the humble (and quite vulnerable) MD5 hash to show how an untampered dataset would pass the checksum test, making sure that re-analysts are all singing from the same datasheet as the original authors, to horribly butcher a metaphor. Might I suggest: “Supplement A1. Raw Data File. MD5 checksum F5942356205BF75AD7EDFF103BABC6D3”.
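That single line would let any re-analyst verify the data before touching it – a sketch, with a hypothetical file name and the checksum from this example:

```r
published <- "F5942356205BF75AD7EDFF103BABC6D3"
observed  <- toupper(unname(tools::md5sum("Supplement_A1_raw_data.sav")))

if (identical(observed, published)) {
  message("Checksum matches: this is the data set the authors analysed.")
} else {
  message("Checksum mismatch: corrupted download, or someone's been at the Oreos.")
}
```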

 

H-index update: still pathetic.

Oh lookie – my H-index went up.

That is academic-speak for “my dick just got a bit less small”. The H-index rose from 2 (“oppressively pathetic”) to 3 (“aggressively mediocre”). At some point this year maybe it’ll rise to 4 – “impressively flaccid but with mounting tumescence”.

For an explanation of what all the hubbub is about, check out the Wikipedia page on the H-index. TL;DR: I have 3 papers which have each been cited at least 3 times. My best-cited paper is still the systematic review “Persuasive System Design Does Matter: A systematic review of adherence to web-based interventions” at JMIR (I’m second author).
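For the terminally bored: the whole definition fits in a few lines of R (citation counts made up, and sadly more impressive than mine):

```r
# h-index: the largest h such that h papers have at least h citations each.
h_index <- function(cites) {
  cites <- sort(cites, decreasing = TRUE)
  sum(cites >= seq_along(cites))
}

h_index(c(12, 5, 3, 3, 2, 1))   # 3: three papers cited at least 3 times each
```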

 

-RK

New paper in the bulletin of the EHPS


What’s up with the speed of eHealth implementation?

Fresh off the virtual press at the bulletin of the European Health Psychology Society: Jeroen Ruwaard and I look into the pace of eHealth implementation. Many bemoan the slow implementation and uptake of eHealth, but aren’t we in fact going too quickly? We examine four arguments for implementing unvalidated (i.e., not evidence-based) interventions and find them quite lacking in substance, if not style.

Ruwaard, J. J., & Kok, R. N. (2015). Wild West eHealth: Time to Hold our Horses? The European Health Psychologist, 17(1).

Download the fulltext here [free, licensed under CC-BY].

Re-amping Ritual, Rejoice!

In preparing the re-issue of our critically acclaimed but sold-out 2003 debut album The Apotheosis, we decided to have a little fun and re-record a few of the old tracks, just a tad shy of 10 years later. And wow, has technology come a long way since 2002/2003. We now do basically everything ourselves. No wait, we literally do everything ourselves, apart from mixing and mastering. Most of us still remember fiddling about on little 4-track Tascam recorders that used ordinary cassette tapes; nowadays we do 8-track digital stuff in Pro Tools in unimaginable sound quality without even batting an eyelid. Now, I’ve always been a big fan of re-amping.

The contenders. Left to right: Røde NT1000, Shure SM58, Audio Technica ATM25 (x2), Audio Technica ATM21, Audio Technica ATM31R, Audio Technica AT4033a, AKG D112.

Long story short, re-amping means not recording a thundering amp while you play, but recording just the instrument and playing it back through an amp later. This has a number of advantages, but for DIY types like us the biggest is having total control over your sound while you’re not playing. Essentially, you get to be the bass player and the sound engineer in one, and you don’t have to play something, listen back, put down your bass, fiddle with your amp/microphone, put on your bass, play something, and do it all over again, ad nauseam. Armed with a nice selection of microphones, we set to work with an Ampeg 8×10 loaned to us by Tom of the almighty Dead Head. I used my SVP-PRO (we are inseparable) and trusty Peavey power amp, and started experimenting with microphone placement and combinations.

D112, ATM25 and AT4033a in action. NT1000 to the far left in the corner, not in the pic.

The winning combination turned out to be the ATM25 off-axis, right on the edge of the cone at 45 degrees, edged back just about an inch, with the AT4033a at 70 cm (2.3 feet), just about at the vertical centre of the 8×10.

Tadaa.

I used Audacity to make these cool plots, and the graphs clearly show the differences between the microphone signals. At the end of the day, the D112 was too boomy anywhere near the speaker cone (the very proximity effect the D112 is ‘famous’ for), while the ATM25 simply sounded more gritty, dark and… well, evil. The AT4033a complemented the ATM25 perfectly, topping off the ATM25’s low-end gurgle with a snappy, gnarly high-mid end. Interestingly, the Røde NT1000 stashed away in the far corner picked up quite some lows and mids, as you can see from the huge hump below 100 Hz, but I’m not sure we’re going to use it (there is quite an audible rattle in there somewhere from something vibrating).


Shoddily pasted graph showing the frequency responses of the different mikes in their different settings. Note the huge low-end response on the NT1000 condenser!

Here are some sound samples, straight from the board with just a touch of compression (ratio 1:2.5, 0.1 ms attack, 2 s decay).

Røde NT-1000

Audio Technica AT4033a

Audio Technica ATM25

AKG D112

Quick thought

“Meta-analyses are ventriloquist’s dummies. Sitting on a wise man’s knee they may be made to utter words of wisdom; elsewhere, they say nothing, or talk nonsense, or indulge in sheer diabolism.” – Adapted from Aldous Huxley

Corrected JMIR citation style for Mendeley desktop

Endnooooooooo!te.

100 out of 100 academics agree that working with Endnote is about as enjoyable as putting your genitals through a rusty meat grinder while listening to Justin Bieber’s greatest hits at full blast and being waterboarded with liquid pig shit. I’ve spent countless hours trying to salvage the broken mess that Endnote leaves and have even lost thousands of carefully cleaned and de-duplicated references for a systematic review due to a completely moronic ‘database corruption’ that was unrecoverable.

Thankfully, there is an excellent alternative in the free, open source (FOSS) form of Mendeley Desktop, available for Windows, OS X, iToys and even Linux (yay!).

One of the big advantages of Mendeley over Endnote, apart from it not looking like the interface of a 1980s fax machine, is the ability to add, customise and share your own citation styles in the .csl (basically XML, also used by Zotero) markup. While finishing my last revised paper, I found out that the shared .csl file for the Journal of Medical Internet Research (a staple journal for my niche) is quite off, throwing random, unnecessary fields into the bibliography that do not conform to JMIR’s instructions for authors.

The online repository of Mendeley is pretty wonky and the visual editor isn’t too user-friendly, so I busted out some seriously nerdy h4xx0rz skillz (which chiefly involved pressing backspace a lot).

Get it.

Well, with some judicious hacking, I present to you a fixed JMIR .csl file for Mendeley (and probably Zotero, too). Download the JMIR .csl HERE (you’ll probably need to click ‘save as’, as your browser will try to display the XML stream). It’s got more than a few rough edges, but it works for the moment. Maybe I’ll update it some time.

According to the original file, credits mostly go out to Michael Berkowitz, Sebastian Karcher and Matt Tracy. And a bit of me. And a bit of being licensed under a Creative Commons Attribution-ShareAlike 3.0 License. Don’t forget to set the Journal Abbreviation Style correctly in the Mendeley user interface.

Oh, I also have a Mendeley profile. Which may or may not be interesting. I’ve never looked at it. Tell me if there’s anything interesting there. So, TL;DR: Mendeley is FOSS (Free Open Source Software), Endnote is POSS (Piece of Shit Software).

Update: A friendly blogger from Zoteromusings informed me in the comments that I was wrong: Mendeley is indeed not FOSS but just free to use, and not open source. Endnote is still a piece of shit, though. I was right about that 😉