Things have changed.
The architecture of Aurea Monitor (Actional) had not received major changes for a long time.
With the release of version 11.x, a new architecture was introduced.
You might be used to this:
- An (Actional) Agent = A Node in Management Server = A monitored server
- Communication (Agent / Management-Server) is synchronous using HTTP
- Intermediary can run standalone or with a Management Server
- Agent/Intermediary configuration local to each node
Ten years ago, the above was perfectly adequate. Nowadays, with more dynamic environments, hybrid setups (cloud & on-premise), and, last but not least, containerization (Docker, Kubernetes), these concepts are no longer sufficient.
Let's look at some details of the new architecture:
The one-to-one mapping of Agent, node, and monitored server has been broken up. An Agent now supports monitoring more than one server/container.
The monitored system can define which Agent to report the traffic to.
The Agent forwards the received events to the server together with information about their origin (the monitored server).
The server keeps track of all the reported/known endpoints (hostnames, addresses) of a monitored server and links them to a node.
In a dynamic/containerized environment these endpoints might change. Nevertheless, you might want to ensure that Aurea Monitor always treats them the same (ignoring their unique container/machine identifier).
In other words, you want to avoid a new node for every additional machine/VM/container that is brought up.
In 11.x+, configuration parameters exist to control this behavior and ensure a single node representation.
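The idea can be sketched as follows. This is a minimal, illustrative simulation of endpoint-to-node resolution, not the actual Aurea Monitor implementation; the function names and the suffix pattern (Kubernetes-style replica names) are assumptions for illustration only.

```python
# Illustrative sketch: map dynamic endpoint names to a single stable node
# by stripping a container/replica-specific suffix before registering.
# Not the real product code; names and patterns are assumptions.

import re

def stable_node_key(hostname: str) -> str:
    """Strip a trailing Kubernetes-style replica suffix (e.g. '-7d9f4b6c5-x2k1q')
    so all replicas of the same service resolve to one logical node."""
    return re.sub(r"-[0-9a-f]{6,10}-[0-9a-z]{5}$", "", hostname)

class NodeRegistry:
    """Tracks all reported endpoints per logical node."""
    def __init__(self):
        self.nodes = {}  # node key -> set of known endpoints

    def register(self, endpoint: str) -> str:
        key = stable_node_key(endpoint)
        self.nodes.setdefault(key, set()).add(endpoint)
        return key

registry = NodeRegistry()
# Two replicas of the same containerized service...
k1 = registry.register("orders-7d9f4b6c5-x2k1q")
k2 = registry.register("orders-7d9f4b6c5-m8r3z")
# ...resolve to the same logical node with two known endpoints.
print(k1 == k2 == "orders")           # True
print(len(registry.nodes["orders"]))  # 2
```

Whatever the concrete matching rules look like in the product, the effect is the same: a restarted or rescheduled container reports under a new endpoint but keeps contributing to the same node.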
Agent / Management Server Communication
The communication between the Agent and the Management Server is now asynchronous. Instead of HTTP, the architecture was changed to use JMS (e.g. Aurea Messenger).
Only topics and temporary queues are used for communication. This way the required JMS server configuration is kept to a minimum.
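The topic/temporary-queue pattern described above can be simulated with a conceptual sketch. The in-memory "broker" below only illustrates why no static queue configuration is needed; the real architecture uses a JMS broker such as Aurea Messenger, and none of the names here are product APIs.

```python
# Conceptual sketch of the topic + temporary-queue pattern, simulated with
# in-memory structures. The real architecture uses a JMS broker (e.g.
# Aurea Messenger); all names here are illustrative, not product APIs.

import queue
import threading
import uuid

class Broker:
    """Minimal stand-in for a JMS broker: named topics plus temporary queues."""
    def __init__(self):
        self.topics = {}       # topic name -> list of subscriber queues
        self.temp_queues = {}  # temporary queue id -> queue.Queue

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message, reply_to=None):
        for q in self.topics.get(topic, []):
            q.put((message, reply_to))

    def create_temp_queue(self):
        # Temporary queues are created on demand and owned by the requester,
        # which is why the static broker configuration stays minimal.
        qid = "temp." + uuid.uuid4().hex
        self.temp_queues[qid] = queue.Queue()
        return qid

broker = Broker()
inbox = broker.subscribe("monitor.commands")  # well-known topic

def server():
    # Management-Server side: consume a request, answer via its reply queue.
    msg, reply_to = inbox.get()
    broker.temp_queues[reply_to].put("ack:" + msg)

threading.Thread(target=server, daemon=True).start()

# Agent side: publish a request carrying a temporary reply queue.
reply_q = broker.create_temp_queue()
broker.publish("monitor.commands", "config-request", reply_to=reply_q)
reply = broker.temp_queues[reply_q].get(timeout=2)
print(reply)  # ack:config-request
```

Because replies travel over throwaway queues the requester creates itself, the broker only ever needs the well-known topics to be configured.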
It is no longer possible to have a standalone Intermediary without a Management Server.
A Management Server is mandatory for each Aurea Monitor environment. All the configuration is provisioned from the server to each Intermediary instance.
The previous paragraphs left out one important aspect of the communication and the architecture: the concept of the Launcher.
The Launcher, not the Agent/Intermediary, is what is initially started. It is a key component of the new architecture.
On startup, it connects to the Management Server via HTTP (this is the only part of the communication that still requires HTTP) and fetches a configuration profile.
Example configuration.json which defines server location and profile:
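A minimal, illustrative configuration.json might look like the following; the field names and values are assumptions for illustration, not the exact product schema.

```json
{
  "serverUrl": "http://mgmt-server.example.com:8080",
  "profile": "agent-production"
}
```

The essential point is that the file only needs to identify the Management Server and the profile to fetch; everything else is provisioned from the server.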
The profile contains the start command configuration (e.g. JVM arguments), product configuration (e.g. service groups, transports), and also the binaries (e.g. Jetty, Agent).
Once everything is downloaded to the deploy folder inside the launcher folder (e.g. C:\launcher\deploy\), the Launcher starts the configured product, e.g. the Agent, using the active deploy artifacts (e.g. C:\launcher\deploy\active).
A working directory is created and used by the launched product to store all product-specific configuration files and logs (e.g. C:\launcher\working\logs).
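The startup sequence above can be sketched roughly as follows. The function name, folder layout details, and the stubbed profile content are assumptions for illustration only, not the real Launcher code.

```python
# Illustrative sketch of the Launcher's startup sequence. Names, folder
# details, and the stubbed profile are assumptions, not the product code.

from pathlib import Path

def launcher_startup(home: str):
    """Stage the deploy/working folders and build the product start command."""
    deploy = Path(home) / "deploy" / "active"  # downloaded binaries/config
    working = Path(home) / "working"           # product-specific runtime state
    (working / "logs").mkdir(parents=True, exist_ok=True)
    deploy.mkdir(parents=True, exist_ok=True)

    # The profile fetched over HTTP would supply the start command
    # configuration; stubbed here with a fixed JVM argument.
    jvm_args = ["-Xmx512m"]

    # Launch the configured product (e.g. the Agent) from the active artifacts.
    return ["java", *jvm_args,
            "-jar", str(deploy / "agent.jar"),
            "-Dworking.dir=" + str(working)]

cmd = launcher_startup("./launcher")
print(cmd[0], cmd[3])  # the java binary and the active agent artifact
```

The key design point is the separation: deploy holds whatever the server provisioned, while working holds everything the launched product writes at runtime.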
As a best practice, you should no longer do any manual configuration via the Agent/Intermediary web interfaces in production, but should instead rely on the profile configuration.
Centralized management of configuration profiles also allows you to easily upgrade everything from the server.
Simply assign a new product version (e.g. a new release of Intermediary) to the profile and remotely restart the launcher.
The ability to remotely control all launched processes from the server makes it easy to roll out fixes and new versions.
Many of you might already be planning to migrate due to the upcoming end of life of Flash. Keep in mind that only 11.x+ will be de-flashed.
Therefore, installation and migration/upgrade to the new version will be the topic of one of the upcoming posts.