Six short video demos of LLM and Datasette projects
22nd January 2025
Last Friday Alex Garcia and I hosted a new kind of Datasette Public Office Hours session, inviting members of the Datasette community to share short demos of projects that they had built. The session lasted just over an hour and featured demos from six different people.
We broadcast live on YouTube, but I’ve now edited the session into separate videos. These are listed below, along with project summaries and show notes for each presentation.
You can also watch all six videos in this YouTube playlist.
- llm-logs-feedback by Matthias Lübken
- llm-model-gateway and llm-consortium by Thomas Hughes
- Congressional Travel Explorer with Derek Willis
- llm-questioncache with Nat Knight
- Improvements to Datasette Enrichments with Simon Willison
- Datasette comments, pins and write UI with Alex Garcia
llm-logs-feedback by Matthias Lübken
llm-logs-feedback is a plugin by Matthias Lübken for LLM which adds the ability to store feedback on prompt responses, using the new llm feedback+1 and llm feedback-1 commands. These also accept an optional comment, and the feedback is stored in a feedback table in SQLite.
You can install the plugin from PyPI like this:
llm install llm-logs-feedback
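Here is a hypothetical session showing how the commands fit together. I am assuming the feedback commands apply to the most recent logged response, take the comment as a positional argument and that the feedback table lives in LLM’s own logs database:
llm "Suggest a name for a pet pelican"
# Record positive feedback on the most recent response, with an optional comment
llm feedback+1 "Short and useful answer"
# Inspect the stored feedback table in the SQLite logs database
sqlite-utils "$(llm logs path)" "select * from feedback"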
The full plugin implementation is in the llm_logs_feedback.py file in Matthias’ GitHub repository.
llm-model-gateway and llm-consortium by Thomas Hughes
Tommy Hughes has been developing a whole array of LLM plugins, including llm-plugin-generator, a plugin that can help write new plugins!
He started by demonstrating llm-model-gateway, a plugin that adds an llm serve command which starts a localhost server running an imitation of the OpenAI API against LLM models:
llm install llm-model-gateway
llm serve
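Because the server imitates the OpenAI API, any OpenAI-compatible client should be able to talk to it. Here is a hypothetical curl request, where the port and the /v1/chat/completions path are assumptions based on the OpenAI convention rather than documented defaults:
# Hypothetical request against the local gateway
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Say hello"}]}'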
Tommy’s main demo was of llm-consortium, a plugin which uses a consortium of LLM models working together to solve problems.
llm install llm-consortium
llm consortium "Best way to prepare avocado toast" \
--models claude-3.5-sonnet \
--models gpt-4o \
--arbiter gemini-1.5-pro-latest \
--confidence-threshold 0.8 \
--max-iterations 3 \
--output results.json
I ran this and got back:
The best way to prepare avocado toast involves selecting quality ingredients and careful preparation. Start with a thick slice of good quality bread, such as sourdough or whole grain. Toast it to your preferred level of crispness. While the bread is toasting, prepare a ripe avocado. Cut it in half, remove the pit, and scoop the flesh into a bowl. Mash the avocado with a fork, leaving some chunks for texture. Season with salt, black pepper, and a squeeze of fresh lemon juice to prevent browning. Optional additions include a touch of red pepper flakes.
Once the toast is ready, let it cool slightly before spreading the seasoned avocado evenly over it. Consider lightly rubbing a cut garlic clove on the warm toast for an extra layer of flavor (optional).
Enhance your avocado toast with your favorite toppings. Popular choices include: everything bagel seasoning, sliced tomatoes, radishes, a poached or fried egg (for added protein), microgreens, smoked salmon (for a more savory option), feta cheese crumbles, or a drizzle of hot sauce. For a finishing touch, drizzle with high-quality olive oil and sprinkle with sesame or chia seeds for added texture.
Consider dietary needs when choosing toppings. For example, those following a low-carb diet might skip the tomatoes and opt for more protein and healthy fats.
Finally, pay attention to presentation. Arrange the toppings neatly for a visually appealing toast. Serve immediately to enjoy the fresh flavors and crispy toast.
But the really interesting thing is the full log of the prompts and responses sent to Claude 3.5 Sonnet and GPT-4o, followed by a combined prompt to Gemini 1.5 Pro to have it arbitrate between the two responses. You can see the full logged prompts and responses here. Here’s that results.json output file.
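Those intermediate prompts and responses are captured by LLM’s standard logging, so you can also review a consortium run from the command line. A minimal sketch, assuming logging is enabled (it is by default):
# Show the three most recent logged prompts and responses
llm logs -n 3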
Congressional Travel Explorer with Derek Willis
Derek Willis teaches data journalism at the Philip Merrill College of Journalism at the University of Maryland. For a recent project his students built a Congressional Travel Explorer interactive using Datasette, AWS Textract and Claude 3.5 Sonnet to analyze travel disclosures from members of Congress.
One of the outcomes from the project was this story in Politico: Members of Congress have taken hundreds of AIPAC-funded trips to Israel in the past decade.
llm-questioncache with Nat Knight
llm-questioncache builds on top of LLM (https://llm.datasette.io/) to cache answers to questions, using embeddings to return a previously stored answer when a sufficiently similar question has already been asked.
Using embeddings for de-duplication of similar questions is an interesting way to apply LLM’s embeddings feature.
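The underlying approach can be sketched with LLM’s own embeddings CLI: embed each incoming question into a collection, then look for sufficiently similar stored questions before calling the model again. The collection name, ID and embedding model below are illustrative, not the plugin’s actual defaults:
# Store an embedding for a question (and its text) in a "questions" collection
llm embed questions q1 -c "How do I parse JSON in Python?" -m 3-small --store
# Look up previously stored questions similar to a new one
llm similar questions -c "What's the best way to parse JSON with Python?"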
Improvements to Datasette Enrichments with Simon Willison
I demonstrated improvements I’ve been making to Datasette’s Enrichments system over the past few weeks.
Enrichments allow you to apply an operation—such as geocoding, a QuickJS JavaScript transformation or an LLM prompt—against selected rows within a table.
The latest release of datasette-enrichments adds visible progress bars and the ability to pause, resume and cancel an enrichment job that is running against a table.
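Individual enrichments ship as separate plugins on top of the datasette-enrichments framework. A minimal local setup might look like this, with the plugin names taken from the Datasette plugin directory and the database file standing in for your own data:
# Install the framework plus a couple of enrichment plugins, then serve a database
datasette install datasette-enrichments datasette-enrichments-quickjs datasette-enrichments-gpt
datasette mydata.db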
Datasette comments, pins and write UI with Alex Garcia
We finished with three plugin demos from Alex, showcasing collaborative features we have been developing for Datasette Cloud:
- datasette-write-ui provides tools for editing and adding data to Datasette tables. A new feature here is the ability to shift-click a row to open the editing interface for that row.
- datasette-pins allows users to pin tables and databases to their Datasette home page, making them easier to find.
- datasette-comments adds a commenting interface to Datasette, allowing users to leave comments on individual rows in a table.
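All three plugins are also published to PyPI, so a sketch of trying them against a local Datasette instance might look like this (the --root flag signs you in as an actor, which permission-gated features like pinning and commenting typically require):
# Install the three plugins and start Datasette with a signed-in root actor
datasette install datasette-write-ui datasette-pins datasette-comments
datasette mydata.db --root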