I had the pleasure of presenting on building Java applications with LLMs together with Bazlur at GeeCon 2025. The weather was amazing, and Krakow is a beautiful, historic city.
Key Topics Covered
Here are the key topics from the video with direct links to those sections:
- LangChain4j Basics: An introduction to the framework, demonstrating how it abstracts communication with various LLMs like OpenAI and Gemini using builder patterns.
- Prompt Engineering: The speakers explain the difference between System Prompts (defining the AI’s behavior/personality) and User Prompts (the specific query).
- AI Services & Streaming: A look at how to create high-level interfaces for AI interactions, including streaming responses for real-time chat experiences.
- Memory Management: How to provide LLMs with context from previous conversations using providers like `MessageWindowChatMemory` and by storing history in databases.
- Tools (Function Calling): A deep dive into how LLMs can trigger Java methods to perform specific tasks, such as fetching web content or compiling Java code.
- Jakarta EE Project Generator: A demonstration of using an LLM tool to generate a complete Jakarta EE project structure via a chat interface.
- Retrieval-Augmented Generation (RAG): Using PGVector and embedding models to store and retrieve private data efficiently.
- Chunking and Tokenization: The importance of segmenting data so the AI receives the right context without exceeding token limits.
- Model Context Protocol (MCP): An introduction to the standard for connecting AI models to external data sources and tools.
- Q&A Session: Discussions on prompt injection, guardrails, and testing non-deterministic AI outputs.
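To give a flavor of the builder pattern mentioned under LangChain4j Basics, here is a minimal sketch of calling OpenAI through the framework. It assumes the `langchain4j-open-ai` dependency and the LangChain4j 1.x API; the model name and environment variable are just illustrative choices:

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class ChatExample {
    public static void main(String[] args) {
        // The builder abstracts the provider: swapping OpenAI for Gemini
        // would only change this builder, not the calling code.
        ChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY")) // assumed env var
                .modelName("gpt-4o-mini")                // example model
                .build();

        String answer = model.chat("Explain Java records in one sentence.");
        System.out.println(answer);
    }
}
```

Because the provider sits behind the `ChatModel` interface, the rest of the application never depends on a concrete vendor SDK.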
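The AI Services, memory, and tools topics above fit together in one high-level interface. The sketch below assumes the LangChain4j 1.x `AiServices` API; `Assistant` and `DateTool` are hypothetical names, and a concrete `ChatModel` (such as the OpenAI builder shown earlier) must be supplied where indicated:

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

// High-level interface: LangChain4j generates the implementation.
interface Assistant {
    @SystemMessage("You are a concise Java mentor.") // system prompt: behavior
    String chat(String userMessage);                 // user prompt: the query
}

// A tool the LLM can decide to call via function calling.
class DateTool {
    @Tool("Returns the current date in ISO-8601 format")
    String today() {
        return java.time.LocalDate.now().toString();
    }
}

public class AssistantExample {
    public static void main(String[] args) {
        ChatModel model = /* e.g. OpenAiChatModel.builder()...build() */ null;

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatModel(model)
                // Keep only the last 10 messages as conversational context.
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .tools(new DateTool())
                .build();

        System.out.println(assistant.chat("What is today's date?"));
    }
}
```

When the model decides the question needs the tool, LangChain4j invokes `today()` and feeds the result back into the conversation before producing the final answer.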
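The chunking point deserves a concrete illustration. A common approach is fixed-size chunks with overlap, so a sentence cut at one chunk's edge still appears at the start of the next; this is a plain-Java sketch of the idea, not the framework's own splitter:

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {

    /**
     * Splits text into chunks of at most maxChars characters, where each
     * chunk repeats the last `overlap` characters of the previous one.
     */
    public static List<String> split(String text, int maxChars, int overlap) {
        List<String> chunks = new ArrayList<>();
        int step = maxChars - overlap; // advance less than a full chunk
        for (int start = 0; start < text.length(); start += step) {
            int end = Math.min(start + maxChars, text.length());
            chunks.add(text.substring(start, end));
            if (end == text.length()) {
                break; // reached the end of the input
            }
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Each 4-character chunk overlaps its neighbor by 2 characters.
        System.out.println(split("abcdefghij", 4, 2));
    }
}
```

In practice the chunk size would be chosen in tokens rather than characters, tuned so that the retrieved context fits within the model's token limit.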
Next up, we are both busy building a workshop on LangChain4j and its integration with Spring. If you are interested in learning more, join us at JNation.pt. Bring your laptop: the session will be 180 minutes with plenty of code to write ;)
