@sudokita @julietnpn @adenn might like this.
Rob gave a presentation about standard prompts and prompting strategies to help accelerate human creation of an ontology. What can it do? What can’t it do? What should it do? In short: what’s the best strategy for using LLMs to reduce effort?
Rob used Llama 3. He says…
- estimated speed-up for LLMs writing the documentation: > 95%
- estimated speed-up for LLMs writing the code: > 40%
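The notes don’t include Rob’s actual prompts, but as a rough illustration of the kind of prompting strategy described, here is a minimal sketch that asks a locally served Llama 3 to draft documentation for an ontology class. The Ollama endpoint, model tag, prompt wording, and class name are all assumptions for illustration, not Rob’s setup.

```python
# Hypothetical sketch: ask a local Llama 3 (served via Ollama) to draft a
# textual definition and usage notes for an ontology class. The endpoint,
# model tag, prompt, and class name are placeholders, not Rob's actual prompts.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

prompt = (
    "You are helping document an OWL ontology.\n"
    "Write a one-paragraph textual definition and three usage notes for the "
    "class 'EnvironmentalExposure'. Use precise, genus-differentia style "
    "wording and do not invent parent classes."
)

response = requests.post(
    OLLAMA_URL,
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=120,
)
response.raise_for_status()
draft = response.json()["response"]

# The LLM only produces a first draft; a human editor still reviews it
# before anything lands in the ontology.
print(draft)
```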
Questions
- Rose asked
- Greg asked: “Have you used thinking models… do they work better than Llama 3? In theory it should kind of do this thinking for you. Or even using a better base model like GPT 4.5 or Claude 3.7 or DeepSeek.”
- Rob hasn’t tried other models, but he says this is definitely a worst-case scenario using Llama 3; he’s sure other, smarter models could do much better.