Insights from Serving on a Mental Health Research Committee
Reflecting on my experience serving on the NIMH Mental Health Services Research Committee (SERV), this piece discusses the review process and timelines, the types of studies reviewed, and the criteria used to score grant applications. The narrative offers insight into how a study section works and the responsibilities of committee members in evaluating mental health services research proposals.
Presentation Transcript
Lessons learned on a study section
My background
- Biostatistician by training
- Served a three-year term (2016-2019) on the NIMH Mental Health Services Research Committee (SERV)
- SERV reviews applications focused on research topics in mental health services, including delivery and financing of services; accessibility, use, quality, cost, and outcomes of services; the impact of health and insurance policy changes; and dissemination and implementation of evidence-based and promising new preventive, treatment, and services interventions
- Have also reviewed on other NIH study sections and for PCORI
SERV
- Observational studies
- Administrative data
- Sensor data
- Social network data
- Disease areas: depression, autism, schizophrenia, suicide
- R01s, R21s, Ks (I usually don't get assigned to R21s)
Timeline
- SERV meets 3 times per year
- About 1.5 months prior to a meeting, the SRO (Scientific Review Officer) sends a list of all applications that will be reviewed so members can identify conflicts of interest: people at your institution, people you have published with, etc.
- A week after that, I receive my list of applications to review
- SERV meets for 1 day; I get about 6 applications, assigned as either 1st, 2nd, 3rd, or 4th reviewer
Timeline, cont'd
- Even though I have a month to review 6 applications, I always wait until the last week
- I spend 4-5 hours per application; applications are around 200 pages
- Comments are due (uploaded to eRA Commons) 4-5 days before the in-person meeting in DC
- Reviewers are expected to read the comments of the other reviewers assigned to the same grants
- In DC, the applications scoring in the top half are discussed; the rest are triaged, and those applicants receive only scores and written comments
How I review a grant
- Scoring sheet: I need to fill out strengths and weaknesses based on 5 criteria: Significance, Investigator(s), Innovation, Approach, Environment
- For each criterion, assign a score from 1 (best) to 9 (worst)
- Provide an overall impact score, which is a weighted score (your own weights) of the 5 individual criterion scores
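One way to make the weighting concrete (purely illustrative; each reviewer chooses their own weights, and nothing in the process prescribes a formula):

```latex
% Illustrative only: s_i is the score on criterion i (1 = best, 9 = worst),
% w_i is the reviewer's own weight on that criterion.
\text{overall impact} \;=\; \sum_{i=1}^{5} w_i \, s_i ,
\qquad \sum_{i=1}^{5} w_i = 1 .
```

For example (hypothetical numbers), a reviewer who puts half their weight on Approach (0.5), 0.2 each on Significance and Investigator(s), and 0.05 each on Innovation and Environment, and who scores those criteria 4, 3, 2, 2, and 1, would land at 0.5(4) + 0.2(3) + 0.2(2) + 0.05(2) + 0.05(1) = 3.15, so an overall impact score of about 3.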
How I review a grant, cont'd
- Summary: I'm trying to fill out the sheet, looking for strengths and weaknesses
- Make it easy for the reviewer to find the strengths: say, "The strengths of this application are...", and make the strengths (bolded) sub-headings under each of the criteria
- Avoid weaknesses, or mention them and then say why they are less important or are not fatal flaws
My grant review steps, cont'd
- My order of review: Abstract, Project Narrative, Facilities, Biosketches, Budget Justification, Budget
- Then I print out the Specific Aims and Research Strategy and read them carefully, making note of strengths and weaknesses, flipping back to other sections of the proposal as needed
- The rest: Human Subjects, Women and Minorities, Children, Resource Sharing Plan
What I'm looking for
Significance
- I usually score this favorably, but give it little weight in my overall impact score: SERV topics tend to be important, and I don't have the subject-matter expertise to really assess this
Investigator(s)
- Has the PI run large projects before? Do they have previous grants?
- Does the PI have a good publication record?
- Have the investigators worked together and published together before?
- Do they have sufficient time allocated to the project?
- Are they at the same institution or spread out all over the US?
- Is there a biostatistician on the project for an adequate amount of time?
Innovation
- From NIH: "Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?"
Innovation, cont'd
- If innovation is not required to answer the research question, then (I think) that is okay, and a poor Innovation score will not hurt the overall impact score
- But one certainly wants to use up-to-date methods, designs, technologies, etc.
Approach
- The meat of the proposal
- As a biostatistician, I focus on the design and the analysis methods
- Figures are good; white space is good; headers are good
Environment
- Usually OK for most applications
- Should be targeted to the proposal; annoying when applications just dump 40 pages of boilerplate
In-person meeting
- Each grant is given around 15 minutes
- There are around 30 people on the study section, but only 4 have been assigned to each application
- For each grant, we begin with each assigned reviewer stating their overall score
- The 1st reviewer then introduces the application, providing a summary of the investigators and the project, followed by their impressions of the grant
- The remaining reviewers then give their impressions, noting where they agree and disagree with the other reviewers
In-person meeting, cont'd
- Often, when it is the 4th reviewer's turn, they might just say, "I have nothing to add." This is encouraged in order to keep things on schedule
- The reviewers then discuss the application, asking each other questions, getting clarification, etc.
- Then the rest of the panel has the opportunity to ask questions, give comments, etc. Other panel members often look at the Specific Aims page, and some might read a grant not assigned to them if it looks interesting
- Then the Chair provides a summary of the application's strengths and weaknesses and asks the 4 reviewers for their final scores
In-person meeting, cont'd
- Finally, everyone on the panel records their final overall impact score
- NOTE: final overall impact scores are expected to fall within the range set by the 4 assigned reviewers. So if the reviewers score 2s and 3s, panel members should not score outside that range; they can if they really want to, but most give some verbal justification when they do
- Corollary: over the course of the day, as it gets harder to pay attention, panel members may just score the average of the 4 reviewers, so one bad reviewer score can bring down an application
3 tips for providing statistical support to grant proposals
Tip 1
- Read the whole grant, including the statistical analysis plan. You need to understand the study in order to propose appropriate methods, and it is really important that the methods can answer the research question. It is hard to know what the research question is if you haven't read the grant.
- Some things you can learn by reading the grant:
  - The PI does not know what the research question is either
  - The effect size is likely to be very small, and you need to account for this in the power analysis or suggest a different study design (see the sketch after this slide)
  - The trial is better framed as a non-inferiority trial
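To make the second point concrete, here is a minimal sketch (not from the talk) of how the required sample size in a two-arm trial grows as the plausible effect size shrinks, using statsmodels' power routines; the effect sizes and design settings are hypothetical.

```python
# Illustrative sketch only: required sample size per arm for a two-sample
# t-test at 80% power, alpha = 0.05, across hypothetical effect sizes.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

for d in (0.5, 0.3, 0.2):  # standardized (Cohen's d) effect sizes
    n_per_arm = power_calc.solve_power(effect_size=d, alpha=0.05, power=0.80,
                                        alternative="two-sided")
    print(f"d = {d:.1f}: about {n_per_arm:.0f} participants per arm")

# Roughly 64 per arm at d = 0.5 but nearly 400 per arm at d = 0.2, so an
# optimistic effect size can understate the needed sample size several-fold.
```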
Tip 2
- If it is a clinical trial, make use of the statistical analysis plan. There is no page limit, and a detailed plan is a sign of diligence and reflects well on the study team.
- The statistical reviewer (probably no one else) will flip to it if they are unclear on something in the main grant, and it is great if they find the answer to their question there.
- Don't just copy and paste what is already in the grant. Include, for example:
  - Power analyses based on a range of effect sizes
  - A detailed mediation plan
  - A detailed plan for missing data (see the sketch after this slide)
  - Equations are OK too
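As one example of the kind of detail that fits in a statistical analysis plan but rarely in the main grant, here is a minimal sketch of a multiple-imputation plan for a partially missing covariate, using statsmodels' MICE implementation; the variable names, model, and missingness rate are invented for illustration.

```python
# Illustrative sketch only: multiple imputation by chained equations (MICE)
# for an outcome model with a partially missing covariate (invented data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1.0 + 0.8 * x1 - 0.3 * x2 + rng.normal(size=n)
data = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

# Make about 20% of x2 missing to mimic an incompletely observed covariate.
data.loc[rng.random(n) < 0.2, "x2"] = np.nan

imp = mice.MICEData(data)                        # chained-equations imputation model
analysis = mice.MICE("y ~ x1 + x2", sm.OLS, imp)
results = analysis.fit(10, 20)                   # 10 burn-in cycles, 20 imputed datasets
print(results.summary())                         # estimates pooled across imputations
```

Spelling out choices like the imputation model, the number of imputed datasets, and how estimates are pooled is exactly the kind of detail the statistical reviewer will look for.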
Tip 3
- Avoid boilerplate language. It is easy to identify, is not always appropriate, and is boring to read. It is also a sign of laziness.
- I see boilerplate language a lot in grants with machine learning aims: they read like a laundry list of different methods that have not been tailored to the current aims. Mediation analyses, too.