Shaaf's blog

A technical blog about Java, Kubernetes and things that matter

Modernizing Legacy Code with Konveyor AI: From EJB to Kubernetes

I always enjoy participating in KubeCon. This time it was at the RAI center in Amsterdam. I have been to many conferences, and the best ones, IMHO, are the ones that are very community focused: for example, DevNexus for Java, GeeCon for Geeks ;), and obviously KubeCon for everything Kubernetes. And of course, making new friends and connections is a great way of learning from all the cool stuff that's going on. That's probably enough name dropping for a Wednesday ;)


Nano Agent, Mega Senses: Adding LSP to the 260-Line Coding Agent

Learn, learn, and learn more—that’s the name of the game. Coding agents are innovating fast; things are getting bigger and, quite often, bloated. To understand what an agent is actually doing, I’ve found it’s best to go back to the basics. It takes a bit more time, but the expertise you gain along the way sets you up for the long haul. So I read Max’s post and thought, how about adding some more things to it? Fetching ideas… done. Let’s add LSP support.
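To give a feel for what LSP support involves, here is a minimal sketch of the LSP wire format in Java (an illustration of the protocol, not code from the agent itself): every LSP message is a JSON-RPC 2.0 body prefixed with a `Content-Length` header counting the body's UTF-8 bytes.

```java
import java.nio.charset.StandardCharsets;

// Sketch of LSP base-protocol framing: a JSON-RPC body is sent over the
// wire prefixed with a Content-Length header and a blank line.
class LspFrame {
    static String frame(String jsonBody) {
        // Content-Length counts UTF-8 bytes, not characters.
        int length = jsonBody.getBytes(StandardCharsets.UTF_8).length;
        return "Content-Length: " + length + "\r\n\r\n" + jsonBody;
    }
}
```

An `initialize` request, for example, would be framed as `LspFrame.frame("{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"initialize\",\"params\":{}}")` before being written to the language server's stdin.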


Using LLMs and MCP to generate static code analysis rules

Scribe is a Model Context Protocol (MCP) server that exposes a single tool: executeKantraOperation. That tool turns structured parameters into YAML rules compatible with Konveyor / Kantra—the static analysis pipeline used for application migration and modernization. This post describes what Scribe does, how it is wired, and concrete examples you can copy.

Static code analyzers are great at what they do. The ability to write custom rules is important because it covers use cases such as an organization having its own frameworks or libraries that don't exist in the public domain, or looking for patterns, anti-patterns, or even best practices around exceptions, logging, and so on. It can get quite cumbersome to write these rules and test them. While every conference in the world today buzzes with the word AI, how about we put it to real practice and provide this valuable feature with LLMs? Hence Scribe, an MCP server that lets an LLM write Konveyor Kantra rules.
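For context, here is a sketch of the kind of YAML rule Scribe emits, following the Konveyor analyzer rule format (the class and rule names below are hypothetical examples, not output from Scribe itself):

```yaml
# Hypothetical custom rule: flag imports of an in-house legacy logger.
- ruleID: custom-logging-00001
  category: mandatory
  effort: 1
  labels:
    - konveyor.io/source=java-ee
  description: Replace the in-house logger with SLF4J
  message: The internal com.example.legacy.Logger should be replaced with SLF4J.
  when:
    java.referenced:
      pattern: com.example.legacy.Logger
      location: IMPORT
```

A rule like this is exactly the sort of thing that is tedious to hand-write for every internal library, which is where an LLM driving `executeKantraOperation` earns its keep.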


Adding Rust Support and Some Major Updates to My Neovim Config

It’s been about 8 months since my last update on neovim4j, and the config has evolved significantly. The name “neovim4j” is now a bit of a misnomer—while it started as a Java-focused setup, it’s grown into a polyglot development environment.

Rust Support 🦀

The biggest addition is comprehensive Rust support. I’ve integrated:

  • rustaceanvim for advanced LSP features powered by rust-analyzer
  • crates.nvim for smart Cargo.toml management and dependency completion
  • codelldb debugger integration
  • neotest for running Rust tests directly in the editor

The Rust setup mirrors the Java tooling quality—full LSP, debugging, and testing all working seamlessly. Semantic highlighting is disabled in favor of Treesitter for more colorful syntax highlighting.
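As a hypothetical illustration of the last point (the post doesn't show its actual config), disabling LSP semantic tokens so Treesitter drives highlighting can be done with a small autocommand:

```lua
-- Hypothetical fragment: strip semantic-token support from attaching
-- LSP clients so Treesitter highlighting takes over.
vim.api.nvim_create_autocmd("LspAttach", {
  callback = function(args)
    local client = vim.lsp.get_client_by_id(args.data.client_id)
    if client then
      client.server_capabilities.semanticTokensProvider = nil
    end
  end,
})
```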


Java+LLMs: A hands-on guide to building LLM Apps in Java

I had the pleasure of presenting on building Java applications using LLMs together with Bazlur at GeeCon 2025. The weather was amazing, and Krakow is a beautiful, historical city.

Key Topics Covered

Here are the key topics from the video with direct links to those sections:

  • LangChain4j Basics: An introduction to the framework, demonstrating how it abstracts communication with various LLMs like OpenAI and Gemini using builder patterns.
  • Prompt Engineering: The speakers explain the difference between System Prompts (defining the AI’s behavior/personality) and User Prompts (the specific query).
  • AI Services & Streaming: A look at how to create high-level interfaces for AI interactions, including streaming responses for real-time chat experiences.
  • Memory Management: How to provide LLMs with context from previous conversations using providers like MessageWindowChatMemory and storing history in databases.
  • Tools (Function Calling): A deep dive into how LLMs can trigger Java methods to perform specific tasks, such as fetching web content or compiling Java code.
  • Jakarta EE Project Generator: A demonstration of using an LLM tool to generate a complete Jakarta EE project structure via a chat interface.
  • Retrieval-Augmented Generation (RAG): Using PGVector and embedding models to store and retrieve private data efficiently.
  • Chunking and Tokenization: The importance of segmenting data so the AI receives the right context without exceeding token limits.
  • Model Context Protocol (MCP): An introduction to the standard for connecting AI models to external data sources and tools.
  • Q&A Session: Discussions on prompt injection, guardrails, and testing non-deterministic AI outputs.
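The memory-management idea above—a `MessageWindowChatMemory` that keeps only the last N messages so the prompt stays within the model's context window—can be sketched in plain Java (a stdlib illustration of the concept, not LangChain4j's actual class):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sliding-window chat memory: once the window is full, the oldest
// message is evicted so the conversation context stays bounded.
class WindowChatMemory {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    WindowChatMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // evict the oldest message
        }
    }

    List<String> messages() {
        return List.copyOf(messages);
    }
}
```

In the real framework you would hand such a memory to an AI Service so it is consulted and updated on every call; the windowing logic itself is this simple.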

Next up, we are both busy building a workshop about LangChain4j and its integration with Spring. If you are interested in learning more, join us at JNation.pt. Bring your laptop: the session will be 180 minutes, with lots to code ;)