Technology writer and software engineer focused on high-performance, low-latency, big-data programming in Java, Python, and C/C++.
I’ve gotten a lot of questions about continuous production profiling lately. Why would anyone want to profile in production at all, and even if production profiling seems reasonable, why leave it on continuously?
I thought I’d take a few moments to share my take on the problem and the success I’ve seen over the past few years applying continuous production profiling to real-world systems.
Profiling these days is no longer limited to high-overhead development profilers. The capabilities of production-time profilers are steadily increasing, and their value is becoming less controversial; for complex applications, some developers now prefer them even during development.
About 7 years ago, I attended a session given by Java Champion Peter Lawrey, leader of Chronicle Software, at a JavaOne conference. Since most of my prior low-latency, high-performance development work had been in C/C++, I was very interested in hearing what Peter might say about how Java addresses this problem.
I caught up with Peter again recently, and asked him some questions about what’s happened since then, and where we are today. Here are my questions and Peter’s responses.
Java bytecode plays a role somewhat similar to the assembly language that a C compiler targets. Bytecode is a low-level instruction set that the JVM executes in order to carry out the processing the developer expressed in their Java (or any other JVM language) program.
In this article, we see that the time to execute the bytecode for a simple program is a small fraction of the time to compile it using javac.
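To make this concrete, here is a minimal sketch (the class and method names are my own invention) showing how to compile a trivial program and inspect the bytecode javac produced:

```java
// Add.java -- a trivial class for inspecting bytecode.
// Compile with `javac Add.java`, then dump the bytecode with `javap -c Add`;
// the add method disassembles to a handful of instructions such as
// iload_0, iload_1, iadd, and ireturn.
public class Add {
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```

Running the compiled class is then just a matter of the JVM interpreting (and eventually JIT-compiling) those few instructions, which is why execution of a program this small is so much cheaper than compiling it.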
For my book “Getting Started with Java on Raspberry Pi”, an example was described to store sensors and measurements in an H2-database through REST APIs with a Spring application on the Raspberry Pi.
The application takes some time to start on a Raspberry Pi, and Adam Bien who does the airhacks.fm podcast, asked me if I could compare this to a similar Quarkus application, which resulted in some nice results.
Everyone who programs in Java, or any of the other languages built on top of the Java Virtual Machine (Scala, Clojure, Kotlin, Groovy, Nashorn, Jython, JRuby, et al.), is familiar with the term “bytecode.”
But how many of us understand what Java bytecode actually is?
The HotSpot JVM has a great many options available, maybe too many. Sometimes we are looking for a specific option, or for the “magic” one that can give an application a serious boost.
I have summed up here, in my humble opinion, some of the most useful JVM options in the context of heap sizing.
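As a quick way to see what heap-sizing options actually do, a small program (my own sketch, not from the post) can print the heap bounds the JVM settled on; run it with different flags such as `-Xms`, `-Xmx`, or `-XX:MaxRAMPercentage` and compare the output:

```java
// HeapInfo.java -- prints the heap bounds the running JVM has chosen.
// Try:  java -Xms256m -Xmx1g HeapInfo
// vs.:  java -XX:MaxRAMPercentage=50.0 HeapInfo
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mib = 1024 * 1024;
        System.out.println("max heap (MiB):   " + rt.maxMemory() / mib);   // upper bound, set by -Xmx
        System.out.println("total heap (MiB): " + rt.totalMemory() / mib); // currently committed
        System.out.println("free heap (MiB):  " + rt.freeMemory() / mib);  // committed but unused
    }
}
```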
The brilliance of the Java Virtual Machine is that it acts, in effect, as an operating system of its own.
In other words, if you use the JVM as your base platform, you don’t have to worry about numerous “if” statements related to the specifics of hardware and operating systems.
The JVM takes care of all of that for you: whatever you write will run consistently on any operating system and hardware that supports the Java Virtual Machine.
JmFrX is a small utility that allows you to capture JMX data with Java Flight Recorder.
In this blog post I’m going to explain how to use JmFrX for recording JMX data in your applications, point out some interesting JmFrX implementation details, and lastly discuss some potential directions for future development of the tool.
During a code review, I suggested some code improvements using JDK 8+ streams.
Here is a discussion of readability and performance!
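The kind of rewrite under discussion typically looks like the following sketch (the names are illustrative, not taken from the actual review): an imperative loop and its stream equivalent produce the same result, but read quite differently:

```java
import java.util.ArrayList;
import java.util.List;

public class StreamRewrite {
    // Imperative version: explicit loop plus mutation of an accumulator.
    static List<String> longNamesLoop(List<String> names) {
        List<String> result = new ArrayList<>();
        for (String n : names) {
            if (n.length() > 3) {
                result.add(n.toUpperCase());
            }
        }
        return result;
    }

    // Stream version: the filter/map pipeline states the intent declaratively.
    static List<String> longNamesStream(List<String> names) {
        return names.stream()
                    .filter(n -> n.length() > 3)
                    .map(String::toUpperCase)
                    .toList(); // JDK 16+; use collect(Collectors.toList()) on older JDKs
    }
}
```

Which version wins on readability is partly taste; on performance, the loop avoids the stream pipeline’s setup cost, which can matter on hot paths but rarely elsewhere.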
The Java Flight Recorder (JFR) is an invaluable tool for gaining deep insights into the performance characteristics of Java applications.
In this blog post, we’re going to explore how custom, application-specific JFR events can be used to monitor a REST API, allowing us to track request counts, identify long-running requests, and more.
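A custom JFR event of the kind described can be sketched like this (the event and field names are hypothetical, but the `jdk.jfr` API calls are the standard ones, available since JDK 11):

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

public class RestMonitoring {

    @Name("demo.RestRequest")          // hypothetical event name
    @Label("REST Request")
    static class RestRequestEvent extends Event {
        @Label("HTTP Method") String method;
        @Label("Path")        String path;
        @Label("Status Code") int status;
    }

    // Wrapping a request handler: begin() and commit() give the event a
    // duration, so long-running requests stand out in the recording.
    static void handleRequest(String method, String path) {
        RestRequestEvent event = new RestRequestEvent();
        event.begin();
        try {
            // ... actual request handling would go here ...
            event.status = 200;
        } finally {
            event.method = method;
            event.path = path;
            event.commit(); // recorded only if a JFR recording is active and the event enabled
        }
    }

    public static void main(String[] args) {
        handleRequest("GET", "/api/sensors");
    }
}
```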
Learn the intricate details of how JVM applications see container resources and how it impacts heap, CPU, and threads.
When you containerize a Java application, make sure you use a base JDK image that is container-aware, so that the JDK sizes the heap and detects CPU counts from the container’s limits rather than the host’s.
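A minimal sketch of such a setup (the image tag and jar name are placeholders, not recommendations): a container-aware JDK base image combined with a relative heap limit, so the JVM sizes itself from the container’s memory limit:

```dockerfile
# eclipse-temurin is one example of a container-aware JDK image; any JDK 11+ image qualifies
FROM eclipse-temurin:21-jre
COPY target/app.jar /app.jar
# Let the heap grow to 75% of the container's memory limit instead of pinning a fixed -Xmx
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-jar", "/app.jar"]
```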