Elicit is now better at answering your questions about papers

Answers to population, intervention, and outcome questions are better.
When we first launched, these answers were ~60% accurate. Now they are ~85% accurate, which matches how well our human research assistants perform this task!

Answers to custom questions are better too.
Last week they were ~61% accurate. Now they are ~69% accurate.


Musings (feel free to skip)
Before pre-trained language models like GPT-3, NLP models were "single-use": people built one model just for extracting the population studied, another just for classifying whether or not there was an effect, and so on.

Now, with pre-trained models, we can use a single model to perform all of these "tasks."

We take one model and ask "What population was studied?" to fill the population column. We ask the same model "What was the outcome?" to fill the outcome column. Once we built the infrastructure to answer one or two questions, we could quickly extend it to answer any question.
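The pattern above can be sketched in a few lines. This is an illustrative assumption, not Elicit's actual code: `call_model` stands in for any pre-trained language model API, and the prompt template and column names are hypothetical.

```python
# Sketch of the "one model, many questions" pattern:
# the same model fills every column, only the question changes.

COLUMN_QUESTIONS = {
    "population": "What population was studied?",
    "intervention": "What was the intervention?",
    "outcome": "What was the outcome?",
}

def build_prompt(abstract: str, question: str) -> str:
    # One generic template serves every column.
    return f"Paper abstract:\n{abstract}\n\nQuestion: {question}\nAnswer:"

def fill_columns(abstract: str, call_model) -> dict:
    # Ask the same model each column's question in turn.
    return {
        column: call_model(build_prompt(abstract, question))
        for column, question in COLUMN_QUESTIONS.items()
    }
```

Adding a new column is then just adding a new question to the dictionary, which is why extending the infrastructure from one or two questions to any question was quick.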

We're already thinking about how we can automatically suggest relevant questions and columns for you given your overall research question. And how you can save sets of questions as templates across multiple projects. 🙂

In the meantime, interrogate your papers here!
