OK. You're digging deeper here, but there's still a level of abstraction in your example.
Let's go all the way.
As we all know, the screen emits frequencies of light from light-emitting elements called pixels.
When all the pixels are put together, we get a picture or a string, etc.
All the screen knows is what light to emit from each pixel.
So, if I wanted to print a string to the screen, at the bottom level I would have to tell each pixel in some area of the screen to display a color that stands out from the color emitted by the pixels around it, in such a way that a pattern emerges that matches letters from our alphabet and forms a sentence.
How would you code that?
I mean, you could encode the frequency digitally, store the bits in a memory location for each pixel, then send the whole shebang to the screen so that it can redraw itself.
Heh, now you are talking about driver development.
The first thing to realize is that code doesn't do everything. You can only program what is made to be programmed (we will ignore HDLs for now). Most of the "turn the color information into a signal at the right time" stuff is built right into the chip rather than exposed through any sort of programmable interface (well, at least it was in the VGA days; I don't know enough about modern video hardware to comment on the programmability of the DVI and HDMI chips).
Generally, there is a chunk of memory with just enough bytes to represent the number of pixels on the screen. The chip that communicates with the monitor goes through that array of memory and simply spits out whatever value is stored in it. When talking about VGA, it does this in the most bizarre way possible: by varying the voltage on the analog lines between the video output and the monitor. For modern digital standards it is literally just spitting out the contents of the memory and letting the monitor on the other end handle turning it into something that can be displayed. For LCDs this is great and much more straightforward than making a VGA reader; for CRTs it is more unnatural (VGA was made the way it is because of how CRTs operate mechanically).
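To make that concrete, here is a minimal sketch of the framebuffer idea in Java, with a plain array standing in for video memory (the names and dimensions are just for illustration, not any real device's layout):

```java
// Toy framebuffer: one int of color data per pixel, laid out row by row.
public class Framebuffer {
    static final int WIDTH = 640;
    static final int HEIGHT = 480;
    static final int[] framebuffer = new int[WIDTH * HEIGHT];

    // The pixel at (x, y) lives at offset y * WIDTH + x in the flat array.
    static void setPixel(int x, int y, int color) {
        framebuffer[y * WIDTH + x] = color;
    }

    public static void main(String[] args) {
        setPixel(10, 20, 0xFF0000); // store "red" for one pixel
        // A display chip would now scan the array top to bottom
        // and emit whatever value it finds for each pixel.
    }
}
```

The real thing has the same shape: the display chip just walks that memory in order and emits each value in turn.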
Now, how do you write code which changes that memory to change the color of the pixel at a given point on the screen? Well, you have to write the driver for the video card to do that, and each device is going to have a vastly different setup. The operating system defines a standard interface for all video card driver writers to conform to (in fact, there is even a standard driver interface the firmware is supposed to deal with... but we will ignore that for now), and that is what programmers eventually program against.
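As a rough sketch of that layering (all names here are hypothetical, not any real OS's API): the OS publishes one interface, and each vendor's driver fills it in for its own hardware:

```java
// The OS-defined contract every video driver implements (hypothetical).
interface VideoDriver {
    void writePixel(int x, int y, int color);
}

// One vendor's implementation. A real driver would poke device registers
// and memory-mapped VRAM; a plain Java array stands in for that here.
class FakeCardDriver implements VideoDriver {
    final int[] vram = new int[640 * 480]; // stand-in for device memory

    public void writePixel(int x, int y, int color) {
        vram[y * 640 + x] = color;
    }
}
```

Everything above the driver only ever sees `VideoDriver`; which card is actually underneath stops mattering.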
In many operating systems, it isn't possible for a userland application to modify the colors at an arbitrary point on the screen. Rather, the user's application is only allowed to modify the colors of the screen real estate given to it by the operating system. It says what it wants at a given x/y position, and the OS in turn puts that color where it needs to be on the screen relative to where the window exists.
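The translation the OS does there is, at heart, just an offset: the app draws at window-local coordinates, and the OS adds the window's position on the screen. A simplified sketch (ignoring clipping, overlapping windows, and everything else a real compositor handles):

```java
// Map a window-local pixel coordinate to an absolute screen coordinate.
public class WindowMath {
    static int[] toScreen(int windowOriginX, int windowOriginY,
                          int localX, int localY) {
        return new int[] { windowOriginX + localX, windowOriginY + localY };
    }

    public static void main(String[] args) {
        // A window sitting at (100, 200) draws its own pixel (5, 7)...
        int[] p = toScreen(100, 200, 5, 7);
        System.out.println(p[0] + "," + p[1]); // ...which lands at screen (105, 207)
    }
}
```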
Now, because operating systems differ vastly in the way they allow userland apps to interact with their windows, yet another layer (or two) is usually added to help programmers write software that can run in multiple places. Those come in the form of SDL for managing windows, input devices, and the OS event system, or OpenGL for a standard 3D rendering interface.
Now, where does Java fit in? It is generally on top of all of that. Java has a standard interaction with the windowing system (Swing, JavaFX) that it allows programmers to program against; it is up to the people who implement the Java platform for the various OSes to make sure those features reliably translate into window creation, pixel mutation, etc. for the programmer.
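From the Java programmer's point of view, all of those layers collapse into calls like this. `BufferedImage` is part of the standard library, and Swing components ultimately paint into buffers much like it:

```java
import java.awt.image.BufferedImage;

public class PixelDemo {
    public static void main(String[] args) {
        // An off-screen image: pixel mutation at the very top of the stack.
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        img.setRGB(5, 5, 0xFF0000); // paint one pixel red
        // Everything below this call (windowing system, driver, framebuffer)
        // is someone else's problem, which was the whole point.
        System.out.println(Integer.toHexString(img.getRGB(5, 5) & 0xFFFFFF)); // prints ff0000
    }
}
```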
In short, it is a big complex ball of wax that gets ever more complex as time goes on. Today, it is a non-trivial task to write a program which can change the color of any pixel on the screen (without having an OS, drivers, etc, which allow you to do just that).