
Users familiar with business process flows and BPMN terminology will recognize what performers are meant for in a process: they can be users, queues, groups, systems (external systems), or adapters. Adapters are one of the most widely used performer types in real-world flows because of their flexibility to integrate with external systems and perform the task required at that point of the business flow.

           In layman's terms, a BizLogic adapter allows applications to invoke Java classes residing in the context of the BizLogic server, which could be running on a remote or local host, to perform a designated task. The adapter framework internally uses the reflection mechanism to load the class and invoke its performing method while the adapter workstep is being executed. The adapter workstep is defined at design time in BPM Studio (shown below):

 

 

The class name and method name are important regardless of whether the developer uses the Generate Code operation. The Generate Code option produces a skeleton class with the class name and performing method in place, along with getters and setters for each dataslot that the adapter workstep carries. A developer can also leave that option unchecked and write a class of their own, with the name and performing method exactly as specified in the dialog above, making sure to deploy it to the correct location so the framework can access it at run time.
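The reflection mechanism mentioned above can be sketched in plain Java. This is a simplified illustration of the idea, not actual framework code; the adapter class and method names below are invented for the example:

```java
import java.lang.reflect.Method;

public class AdapterDispatchSketch {

    // A stand-in adapter class with a performing method, roughly what the
    // generated skeleton would look like (names here are hypothetical).
    public static class CreditCheckAdapter {
        public String checkCredit() {
            return "credit-approved";
        }
    }

    // Load the class by the name given in the workstep dialog and invoke
    // its performing method, roughly as the adapter framework does.
    public static Object execute(String className, String methodName) throws Exception {
        Class<?> adapterClass = Class.forName(className);
        Object adapter = adapterClass.getDeclaredConstructor().newInstance();
        Method perform = adapterClass.getMethod(methodName);
        return perform.invoke(adapter);
    }

    public static void main(String[] args) throws Exception {
        Object result = execute("AdapterDispatchSketch$CreditCheckAdapter", "checkCredit");
        System.out.println(result); // prints credit-approved
    }
}
```

This is also why both the class name and method name in the dialog matter: the framework has nothing but those two strings to resolve the code at run time.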

 

Types of BizLogic Adapters:

 

There are basically two types of adapters: synchronous and asynchronous. Synchronous adapters are executed synchronously with the flow; in other words, regardless of how much time the adapter class takes to complete its execution, the BizLogic or BizSolo engine waits for the execution to finish.

         Asynchronous adapters, on the other hand, are just the opposite: when BizLogic calls such an adapter it does not wait for the class's execution to finish, and proceeds with the application flow while the adapter finishes at its own pace. This kind of adapter is recommended for any long-running tasks we need to perform. Asynchronous adapters can be further divided into two sub-categories: without output dataslots and with output dataslots. The one without output dataslots is the usual asynchronous type and needs no additional care to construct (the adapter completes and the application continues to the next workstep). The one with output dataslots behaves more like a synchronous adapter because, to complete, it has to call the function completeCallerWS(). The parameters are the BizLogic session, process template name, workitem name, and a hashtable that contains the dataslot values to update before completing the adapter.
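To make the completion call concrete, here is a minimal sketch. The real completeCallerWS() belongs to the BizLogic server API and needs a live session; the stub below only mirrors the parameter shape described above, and all names (process template, workitem, dataslot) are hypothetical:

```java
import java.util.Hashtable;

public class AsyncAdapterCompletionSketch {

    // Stub standing in for the BizLogic API call; in the real framework this
    // completes the caller workstep and pushes the output dataslot values
    // back into the process instance.
    static String completeCallerWS(Object blSession, String processTemplate,
                                   String workitemName, Hashtable<String, Object> dataslots) {
        return "completed " + processTemplate + "/" + workitemName
                + " with dataslots " + dataslots;
    }

    public static void main(String[] args) {
        // Output dataslots the adapter computed during its long-running work.
        Hashtable<String, Object> outputs = new Hashtable<>();
        outputs.put("loanStatus", "APPROVED");

        // null stands in for the real BizLogic session handle.
        String result = completeCallerWS(null, "LoanProcess", "RiskScoringAdapter", outputs);
        System.out.println(result);
    }
}
```

The key design point is the hashtable: because the adapter runs detached from the flow, the only way its results reach the process instance is through the dataslot values handed to this completion call.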

           The adapter implementation typically uses Java Message Service (JMS) APIs and infrastructure. When an adapter is invoked, a JMS message is sent to a JMS queue. A message-driven bean (MDB) listening to this queue picks up the message, decodes it, and executes the adapter.

 

The framework categorizes adapters internally based on what the developer chooses at design time:

 

 

For example, when a user marks an adapter as "long running", it is internally categorized as an enterprise adapter, and messages for these adapters are sent to BLEnterpriseAdapterQueue. If the user does not mark it as long running, it is categorized as a process adapter and its messages are sent to BLAdapterQueue. Dedicated MDBs listen to the respective queues to process the incoming messages.

          Process adapters can be further categorized into inline adapters. Inline adapters were designed to avoid the JMS infrastructure overhead of the adapter framework. When the engine detects an adapter with the "execute in the same thread" flag checked, it executes the adapter directly in the same thread instead of sending a JMS message. Because the adapter runs on the same thread that completed the previous workstep, this is more efficient than the usual message send-and-consume path of the adapter framework.

 

Adapter Class Loading:

           The Business Manager application can reload the latest version of an adapter class dynamically from the BizLogic server without restarting the server. The BizLogic server is configured to dynamically reload adapter classes from the SBM_HOME\ebmsapps directory, defined in the sbm.conf file via the sbm.application.home parameter. So when adapter-specific resources are deployed, they go into the corresponding subdirectories of SBM_HOME\ebmsapps.

 

BizLogic allows loading class files from specific locations as well as from JAR files.

 

• Locations common to all applications

  • SBM_HOME\ebmsapps

  • SBM_HOME\ebmsapps\common\classes

  • SBM_HOME\ebmsapps\common\lib

• Locations specific to an application

  • SBM_HOME\ebmsapps\<application name>\lib

 

The basic essence of dynamic class loading (DCL) is that if a class that is already loaded is modified, or is moved to a new location, it needs to be loaded again. The "is the class modified" check is based on the last-modified time of the class file. If the class is in a JAR file, the check is based on the last-modified time of the JAR file as well as the last-modified time of the class file. (We will cover DCL in detail in a separate post.)
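The timestamp check can be sketched as follows; a simplified model (method and class names invented for illustration), assuming the loader records when it last loaded each class:

```java
import java.io.File;

public class ReloadCheckSketch {

    // A class needs reloading if its .class file (or, when packaged,
    // the enclosing JAR) was modified after we last loaded it.
    public static boolean needsReload(long loadedAtMillis, File classFile, File jarFile) {
        long newest = classFile.lastModified();
        if (jarFile != null) {
            newest = Math.max(newest, jarFile.lastModified());
        }
        return newest > loadedAtMillis;
    }

    public static void main(String[] args) throws Exception {
        File classFile = File.createTempFile("MyAdapter", ".class");
        classFile.deleteOnExit();
        long loadedAt = classFile.lastModified();

        // Nothing changed since load time: no reload needed.
        System.out.println(needsReload(loadedAt, classFile, null));

        // Touch the file to simulate a redeployed adapter class: reload needed.
        classFile.setLastModified(loadedAt + 5_000);
        System.out.println(needsReload(loadedAt, classFile, null));
    }
}
```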

 

Troubleshooting adapter flow in the BizLogic engine:

In a Savvion installation, the file bizlogic.conf under the conf directory has various debug keys that, when enabled, log detailed information about the operation the engine is currently executing. The key specific to external performers (adapters) is bizlogic.debug.ep; set this to true to get detailed information on adapter execution.

 

All supported application servers have their own way of dealing with JMS-related tasks, but we will take the example of the most widely used application server among our prospects, IBM WebSphere Application Server (WAS).

 

WAS handles all JMS-related tasks with its reserved thread pool named SIBJMSRAThreadPool. An example log message from the BizLogic engine appears as below:

>> BizLogic | DEBUG | ejbServer | BizLogic | EPM.readDataFromMsg(): | SIBJMSRAThreadPool : 6 |#]

 

So one should look into such threads specifically for details on adapter execution. But there are many other tasks within Savvion handled by the JMS infrastructure, so this thread pool may also contain threads performing other operations (for example, tasks from the service MDB daemon). Threads that involve com.savvion.sbm.bizlogic.server.EPManager operations are the ones of interest if you are troubleshooting adapter threads in the JVM.

 

Frequently encountered issues with adapters in production:

 

BLAdapter thread count reaching zero!:

 

This problem usually shows up in application servers like WebLogic. Even though this count means the configured execute thread count for the pool has reached its limit, WebLogic internally pushes its standby threads into service as well to handle incoming requests. But if an environment frequently hits this situation, it is recommended to tune the pool and increase the thread count (for WebLogic Server).

                             Remember, increasing the thread count is easy, but it depends tightly on system resources: for example, CPU capacity, memory availability, and, if the adapters frequently perform DB operations, the pool size of the involved datasource as well. For WAS, this is the default setting and how to increase it to a custom level.

 

Adapter stuck in activated state!:

 

This is the most frequent and troubling problem in production scenarios. When a user complains that an adapter is stuck in the activated state, it means the state of the adapter is active and will not transition to the next state (i.e., completed) on its own. There are many possible causes, for example an adapter performing a particular operation on activation that took longer than the JTA timeout. This is not normal behavior, because an adapter's execution is supposed to be as quick as that of an independent Java client, and it requires manual intervention to push the flow forward.

             This problem is highly dependent on the environment and on the kind of operation the adapter performs. It is hard to replicate in house unless similar production load is generated and the other contributing factors are present. The first approach that a level 1 support team monitoring or maintaining the application takes in such cases is to restart the BizLogic server, and it works. If too many requests have piled up, some of them clear and the rest stagnate again in the same state, forcing the team to restart the BizLogic server multiple times to clear all the requests waiting in this state.

              As you already know, an adapter uses the JMS infrastructure to complete its execution (sending > receiving > processing of messages). If an adapter is stuck in the activated state, the message sent on behalf of that adapter is gone, either unprocessed or lost, and this is why it stays in the same state forever until the BizLogic server is restarted. What happens behind the scenes is that when the BizLogic server restarts, it re-sends the messages for all adapters that were in the activated/suspended state while the server was being stopped. Internally this is called restarting external services. It pushes the activated adapter's message back onto the queue and makes it available for the MDBs to process. It is a tiring process and, above all, not acceptable in most production environments (although it has been the only first aid available for this problem so far).

       We can extract the essence of what happens during a BizLogic server restart and invoke it from a BizLogic application to simulate the restart. In other words, it is possible to take the external-service operation that happens during restart, put it inside a BizLogic adapter, and run it as an application within the same environment. Going into further detail, behind the scenes the BizLogic engine calls BLControl.doRestartExternalServices(); to resend messages for already activated and suspended adapters. This call is specific to clustered environments, and the engine makes sure to check for that (it is also controlled by a parameter exposed in the configuration file). It is very rare for this situation to show up in a standalone environment, but if it does, we may have to set the parameter on the fly with BLUtil.self().setClustering(true); before finally calling doRestartExternalServices(). We will talk about that later if needed, but meanwhile, after all we have discussed, here is the short snippet, which is all we need to put in the performing method of an adapter in a BizLogic application that has just this adapter as a workstep:

 

System.out.println("Resuming external services");
BLControl.doRestartExternalServices();
System.out.println("Done Resuming external services");

 

If this application is deployed to an environment that runs into the mentioned problem, we can create instances of it to signal the engine to restart external services the same way it happens when the BizLogic server is restarted. This avoids manually bouncing servers to clear out the stuck adapters.

 

Note: Users can detect all adapter worksteps in the activated status by executing the following query:

select process_instance_id, workstep_name from bizlogic_workstepinstance where type=107 and status =18;

 

Adapter suspended but the resume does not work!:

 

When the adapter execution fails and throws an exception to the caller, the BizLogic server captures the remote exception, logs the message and stack trace, and suspends the adapter workstep. BizLogic tries to re-invoke the adapter. After the specified number of retries (configurable), BizLogic generates two events, EP_AFTERBREAK and W_SUSPENDED.

Applications can then use either of these events to take further action as required. EP_AFTERBREAK is fired to register that a retry of the adapter execution was attempted and that the problem could be fixed within the span of retries.

          Bizlogic.log and EJBServer.log (SystemOut.log in WAS) are the best places to look for the reason for the failure in such cases. If the logs are rolling over at a high rate, we can look up the events for the process instance whose adapter is failing and filter the events mentioned above for that adapter workstep. These events carry the stack trace (reason) for the failure. Any admin user with access to the BPM Portal Admin tab can browse to the Audit Event section and filter by event, process template, process instance, etc.

I ended my previous blog post with the question: “Why is the consumer application not fast enough?“

This is a very common question. The root cause of slowness or increased response time is often not very obvious.

For the end user, the system he or she interacts with is slow. The reality, though, is that this is not always true.

 

Let me take you through an example and show what we can do to find the real root cause.

 

Thread dumps and logs are a starting point if the issue is not intermittent (and it is Java).

Simply take a series of thread dumps using “jstack -l <pid>” and pass them to the support team of the product in question.

For intermittent issues and/or heterogeneous environments (e.g. .net + Java) the root cause analysis is more complex.

 

 

The issue can be load, data(size/content) or environment related. The more backend systems involved in one request the more complex it becomes.

 

For this blog post I created an example to illustrate this.

A customer tries to access a web app and is experiencing long wait times after triggering a request.

 

 

The app is an ASP.NET page which interacts with a REST API. The REST API itself sends requests to a JMS queue.

From that queue an integration engine (CX Messenger, Sonic ESB) is picking up the message.

A business process flow is executed and then the response is sent back.

 

Sample Scenario

 

 

As you can see there are several systems involved. You might argue now that this is a constructed example.

I agree, but reality is often even more complex than this, which makes it quite hard to understand all the reasons why the final response time is so high.

 

Back to the example. The customer experience happens at the web page, no matter what is done behind the scenes.

The user reported a wait time of circa ten seconds till the page is loaded. Network issues as potential cause have already been eliminated by the operations teams.

Normally the investigation would now start with logs of all involved  components and teams.

Different technology stacks, different ops teams, potential collaboration issues, and a lot of time consumed, etc.

 

 

This is where CX Monitor (Actional) can show one of its strengths.

It allows you to dig into past traffic/interactions using a date time picker or you work proactively (preferred) using policies.

A policy defines a certain rule/condition/target. If the condition is met (e.g response time > 3s) then an action can be triggered.

Typically this action is an alert inside CX Monitor (can be passed to other monitoring systems/dashboards) but can be anything you want.

 

Sample Policy:

 

Policy condition

 

Part of the alert is information about the interactions and (if requested) the data involved in the complete interaction flow.

The flow itself can be reviewed and drilled into. It shows the interactions between the systems and APIs.

 

 

Example Flow Map:

 

Flow map of interactions between systems at given point in time

 

 

 

For “slowness” root cause analysis I personally prefer a different view of the same data. The sequence table which is also part of the alert details in CX Monitor.

In our example it clearly shows us where the time is spent.

 

Example Sequence Table:

 

Sequence table showing time spent in each app

 

The ten seconds reported by the customer are confirmed by this. The sequence table shows that the time is spent in these components:

 

  1. 4 seconds in the aspx page before the REST call is made
  2. 2 seconds on the ESB business process on log file write
  3. 3 seconds on a JDBC/SQL call done by the app that is exposing the REST API
  4. 1 second again on this REST API app after the database call

 

Having this information promptly at hand can save a lot of time and gives valuable insights into monitored systems.

Monitoring can be done for a single application (e.g. Aurea CRM connector/interface/webservices) or a complete heterogeneous environment. Seamless view of interactions across technology stacks (.net, java, etc).

You can even add monitoring capabilities to your custom solution app.

 

You have access to all this via Aurea Unlimited which means no extra cost for your company.

 

Further reading:

Is it possible to monitor ACRM using CX Monitor (Actional)?

In an always-connected world, business needs to integrate a complex network of systems and applications to keep pace with their customer expectations.  These expectations continue to drive the surge of “big data” that is estimated to generate 2.5 exabytes (2.5Bn gigabytes) of information per day across the whole of the internet.   As more networks come online, organizations need a Middleware provider that can continue to supply all of the connectors needed to make these systems communicate without getting bogged-down in time-consuming custom implementations.

 

It was a crisp morning in the fall of 1968 when the NATO Science Committee met in the town of Garmisch, Germany. The topic of this particular day: Middleware. From that point on it was recognized that the only way to make complex computer systems speak to each other was through the use of a middleware layer. While different systems have come and gone over the years, middleware has largely remained the same, continuing to offer the same types of integration capabilities regardless of OS, DB, or application.

 

As we move into middleware's fifth decade of existence, Aurea is taking a fresh perspective on our CX platform to determine how it can continue to serve the interests of our global partners. Enterprises are becoming more complex, with application, hosting, and database usage expanding exponentially every year. With this in mind, Aurea is positioning itself to stay at the head of the pack when it comes to ESB technology and the connectors offered to our clients. Building upon our history of providing a rock-solid and fast ESB, Aurea has started to architect the future, looking at what truly matters to enterprises utilizing our middleware services.

 

As we continue to finalize on the vision of what an ESB looks like for the 21st century (microservices, connectors, services etc.) and beyond, the Aurea CX platform will continue to grow and adapt to the needs of our clients along with laying out a long-term roadmap to get us there.  Tentatively we will be bringing our vision to market early in 2020. We will continue to update you as we make progress and are excited to share this journey with you.

We are excited to announce that the Aurea Customer Experience Platform 2019.1 release is now available. This release provides improved quality, stability, and enhanced speed.

 

CX Messenger

  • Improved performance, CX Messenger installer is now 32x faster
  • Resolved 15 bugs


CX Monitor

  • Resolved 17 bugs


CX Process

  • Resolved 50+ bugs

 

To access the latest versions along with app product documentation and release notes, visit the product library in the Aurea Support Portal.

 

If you have any questions, please contact Aurea Support or your account manager for more information. We appreciate your continued partnership.

This is really something we hear regularly in Aurea Support and in most cases flow control is the cause.

Many customers heard about flow control, but most are not fully aware about the details.

Some even consider it a product bug or limitation.

I understand that it can cause pain, but there is a reason for it, which is why I thought it is worth explaining it in more detail:

 

What is flow control?

In a messaging system you always have a producer and a consumer. Ideally the consumer is at least as fast at processing messages as the producer. In reality this is not always possible.

Reasons are spikes in load, outages on consumer side or simply not well designed architecture.

CX Messenger (Sonic) provides of course some buffers but once these are full, message processing is impacted.

By default the producer is simply blocked until space is available on broker side to take the next message.

This is what we call flow control.
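The producer-blocking behavior can be illustrated with a bounded buffer standing in for the broker-side buffer; this is a plain-Java analogy, not CX Messenger API code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class FlowControlAnalogy {

    public static void main(String[] args) throws InterruptedException {
        // Bounded buffer standing in for the broker-side buffer.
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(2);

        // Producer fills the buffer faster than any consumer drains it.
        buffer.put("msg-1");
        buffer.put("msg-2");

        // Buffer full: the next send cannot proceed. A real producer would
        // block inside send/publish -- this is flow control.
        boolean accepted = buffer.offer("msg-3", 100, TimeUnit.MILLISECONDS);
        System.out.println("send accepted while full: " + accepted);   // false

        // A consumer drains one message and the producer is released.
        buffer.take();
        accepted = buffer.offer("msg-3", 100, TimeUnit.MILLISECONDS);
        System.out.println("send accepted after drain: " + accepted);  // true
    }
}
```

The same back-pressure principle applies on the broker, just with configurable buffer sizes per queue or per subscriber, as the next sections describe.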

 

Let’s get into a bit more detail on this per JMS messaging domain:

 

Point-to-Point (Queues)

Recap of PTP basics: n producers per queue, n consumers per queue allowed, only one of the consumers of the queue will get the message.

 

If the consumers are not fast enough (or disconnected) the broker will queue the messages per queue. Each queue has two configuration options, Save Threshold and Maximum Size.

The Maximum Size defines how many kilobytes of message data the queue can hold.

The Save Threshold defines how much of this data is kept in memory; the rest goes to disk.

Once the Maximum Size is reached flow control kicks in.
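The interplay of the two settings can be modeled in a few lines; a simplified decision sketch with made-up threshold values, not actual broker configuration:

```java
public class QueueBufferModel {
    // Simplified stand-ins for the two per-queue settings described above.
    static final long SAVE_THRESHOLD_KB = 512;   // kept in memory up to here
    static final long MAX_SIZE_KB = 1024;        // flow control beyond here

    // Where does the next message land, given the current queued size in KB?
    public static String placement(long queuedKb) {
        if (queuedKb >= MAX_SIZE_KB) return "flow-controlled";
        if (queuedKb >= SAVE_THRESHOLD_KB) return "disk";
        return "memory";
    }

    public static void main(String[] args) {
        System.out.println(placement(100));   // memory
        System.out.println(placement(700));   // disk
        System.out.println(placement(1024));  // flow-controlled
    }
}
```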

 

Publish/Subscribe (Topics)

Recap of PubSub basics: n producers per topic, n consumers per topic allowed, each consumer of the topic will get the message.

 

If the consumers are not fast enough the broker will queue the messages per subscriber. Each subscriber has buffers which are configured globally in the broker properties.

Once the buffer (per subscriber) of one subscriber (of given topic/pattern) is full, flow control kicks in on the particular topic. This means the slowest subscriber defines/limits the message delivery rate to all subscribers.

To be clear: at that point all the other subscribers on that topic no longer get messages and the publisher is blocked.

(In case you were wondering, yes it is key to detect this guy to prevent flow control. We will get there soon.)

 

Can I avoid flow control?

Now that you know that there are limiting factors, questions might be:

 

     "How to avoid such situations?

     Or how can flow control be avoided at all?

     But is it really a bad thing?

     Does it even help in your architecture?"

 

CX Messenger JMS API allows you to disable it, which will then cause an exception on the message producer side once flow control would kick in.

In most architectures though you would not want to do that, but rather get to the bottom of the cause and act accordingly.

 

So how can you avoid/reduce flow control? As you might guess there is no simple answer to it. It all depends on the cause and is very specific to each implementation.

There are buffers and there is the pace at which messages are produced and consumed. These are the key factors that you have to look at.

 

e.g.

  • For PTP you can increase the number of consumers to ensure messages are consumed faster. A larger maximum queue size will help on spikes on messaging load, but will increase latency (messages might stay longer in the queue).
  • Similar to PTP you can increase buffers for PubSub, but again there is latency impact and also memory impact. In addition there is this magic switch called “Flow To Disk” which allows you to use the whole hard disk as buffer.

 

     “So I just enable that magic switch and all good, great!”

 

Wrong, let me stop your enthusiasm here for a moment.

I personally think Flow To Disk is the worst feature we have.

You wonder why?

The feature itself is great, but the way it is often used causes issues. It simply hides bad architecture and bad configuration. People tend to enable it by default and do not want to invest in proper load tests or architectural/configuration changes. Then, once everything is stuck (e.g., the disk is full or the memory reference buffer is full), Aurea Support is pulled in and expected to fix it.

At this stage though most projects are already live and cannot easily make major changes.

Hopefully this blog post helps you to not make the same mistake.

 

FlowToDisk notification:

 

Back to PubSub: Another option to avoid/reduce flow control is to use shared/grouped subscribers.

It will ensure that each message is only consumed once per shared group.

This allows you to have parallel processing of messages per group but only once per message.

 

How do I know what the cause of flow control is in my architecture?

I hope by now you are convinced that flow control is great and Flow to Disk has to be used with caution.

So the question is: how do you even know that you run into flow control?

 

To detect whether your current deployment is stuck due to flow control, the quickest way is to get a Java thread dump using "jstack -l <pid>".

Look for threads blocked within a 'Job.join' call inside a send or publish.  This indicates that the client is waiting to send a message to the broker and is most commonly due to flow control.

 

For example:

 

"JMS Session Delivery Thread" (TID:0x101E7D30, sys_thread_t:0x3DDDBE8, state:CW, native ID:0x1F9C) prio=5
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java(Compiled Code))
    at progress.message.zclient.Job.join((Compiled Code))
    at progress.message.zclient.Publication.join((Compiled Code))
    at progress.message.zclient.Session.publishInternal((Compiled Code))
    at progress.message.zclient.Session.publishInternal((Compiled Code))
    at progress.message.zclient.Session.publish((Compiled Code))
    at progress.message.zclient.Session.publish((Compiled Code))
    at progress.message.jimpl.MessageProducer.internalSend((Compiled Code))
    …
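When dumps are large, this check can be scripted. A minimal sketch; the string-matching heuristic is my own assumption, while the frame names come from the trace above:

```java
public class FlowControlScan {

    // Flags a stack trace as a flow-control suspect: a Job.join frame
    // reached from a send/publish call, per the pattern described above.
    public static boolean looksFlowControlled(String stack) {
        return stack.contains("zclient.Job.join")
                && (stack.contains("publish") || stack.contains("Send"));
    }

    public static void main(String[] args) {
        String blocked = String.join("\n",
                "at java.lang.Object.wait(Native Method)",
                "at progress.message.zclient.Job.join((Compiled Code))",
                "at progress.message.zclient.Session.publish((Compiled Code))");
        System.out.println(looksFlowControlled(blocked)); // true for this sample
    }
}
```

Feeding each thread's stack from a series of dumps through such a filter quickly shows whether the same producers stay blocked across dumps, which is the strongest flow-control signal.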

 

 

From a proactive monitoring perspective there are several options that the product offers.

Which of the options is best for you depends on product usage.

 

You can setup flow control related broker notifications. PubPause/SendPause notifications are the starting point.

There are additional notifications (e.g interbroker flow control) as well which you should make yourself familiar with.

These notifications can cause a lot of noise, and operations teams rarely investigate them thoroughly.

Some advanced teams offload these to ElasticSearch for analytics. Of course, the better the system is configured, the less noise there is.

These notifications allow you to identify which consumer is causing flow control. The details are available in the PubPause notification:

 

 

 

Note: PubPause/PubResume does not apply/work if you use a shared/group subscription!

     (SlowSubscriber and BackloggedSessionSkip are key here, see below)

 

Especially for PubSub the flow control monitoring has more options. In case you have enabled Flow To Disk the disk usage of the pubsub store and the memory usage of the Flow To Disk can be monitored.

There is another notification which helps to identify slow subscribers and especially (but not limited to) for shared subscribers this is super helpful: application.session.SlowSubscriber

 

 

If a message is stuck at the front of a subscriber's buffer for a defined number of milliseconds, a notification is generated.

This does not replace PubPause but it allows you to detect stuck messages even if no flow control kicked in (yet).

(for PTP the queue.messages.TimeInQueue notification is the best equivalent. It allows you to get notified if a message is pending for too long in a queue.)

 

Related to the slow subscriber monitoring there is another corner case where a shared subscriber might back up on one member of the group. Normally this would cause the whole group to be slowed down, but might not even cause flow control. In more recent releases this has been improved to favor the faster clients while distributing messages in a group.

 

A new notification application.session.BackloggedSessionSkip is raised to identify clients that are backing up.

 

 

Once you have identified the consumer(s) causing this, the next question is: why is the consumer application not fast enough?

 

The answer to that will be given in my next blog post.

 

 

 

References:

How can a thread dump be generated from a Sonic Container or Client?

Assessing Flow Control condition.

How to monitor subscribers to identify slow message consumption?

Slow shared subscriber impacts other subscribers in the group

Monitoring for flow control using the Sonic Management Console

What is Flow to Disk?

Under what condition a publisher might get flow controlled even though flow to disk is enabled?

Publisher flow controlled even though FlowToDisk is enabled.

Already registered in AureaWorks? Watch the video now: Webinar Recording: An Insider's Update on Aurea's CX Platform Solutions | For Customers

 

Not registered yet in AureaWorks - Aurea's customer community – but want immediate access to webinar recordings and other great content? It's free! Sign up here.

 



In the webinar you can hear how CX Platform is evolving to serve you even more effectively, including:

 

How Aurea's new Enterprise offering enables better performance, cloud deployments and automated monitoring

 

How to leverage Aurea's cloud solutions to simplify and reduce costs while maximizing your integration capabilities

 

 

 




Hello Customer Experience Platform community!

As you know, the GDPR for European customers comes into effect on May 25th, 2018 with regulations around data privacy requirements to protect your personal data. At Aurea, data protection is of critical importance so we are committed not only to our own compliance, but also helping our customers address the GDPR requirements that are relevant to our products and services.

To stay on top of Aurea communications regarding GDPR, CXP customers should follow the Aurea and GDPR Compliance space (Partners, please follow Aurea and GDPR Compliance - Partners).  If you are unsure how to follow a place, please see How to Follow Places in AureaWorks.

Just looking for the GDPR white paper for CXP customers? You can find them here:


Please do not ask questions regarding GDPR on this blog. If you have GDPR questions, please reach out to your Account Manager.

Happy New Year! Thank you for joining us here in AureaWorks, I'm looking forward to our continued conversation about the CX Platform products.

 

CX Platform was Aurea's first acquisition, and the basis upon which we founded the company. Today more than ever, these products and the customers who use them continue to inform our viewpoint on how we can grow our business through delivering exceptional customer experiences and expanding the capabilities of the products that we acquire. I'm excited about the progress that we've made and am even more excited about future.

 

Here are some key milestones that we’ve achieved:

  • We developed a Platinum Support offering for premium support that specifically addresses the needs of the vast majority of our customers, including 24/7 support, managed upgrades, architecture reviews, and dedicated resources to manage issues and keep customers apprised of progress and adherence to SLAs.
  • We have created the CX Platform Enterprise Edition, which combines the strengths of our products into a unified platform. With the Enterprise Edition we focus on delivering new capabilities that meet current and future needs for integration technologies in a rapidly changing and increasingly digital economy.

We also know that there have been some challenges along the way and we continue to work hard to identify these and make course corrections.

  • We’ve spent considerable time working on making our support model both responsive and tailored to address the kinds of questions and issues that our diverse set of customers may have.
  • We have made great strides on our goal of “drop-in replacement” upgrades, which let customers take advantage of the latest release without the major hurdle associated with migrations.
  • We plan to leverage this community to provide far more transparency and opportunities for input in the roadmap and release schedule.

 

Stay tuned for the next installment of my blog, where I begin to address the roadmap for CX Platform going into 2018.

 

I welcome any questions or comments you have about your CX Platform experience. Feel free to engage in the conversation!

 

Curt

We are excited to announce the new release of Aurea CX Monitor - formerly Actional - release 2017 R4. Available now, CX Monitor 2017 R4 delivers improvements and resolves issues for both Standard and Enterprise Editions:

 

  • Enhanced big data features for Enterprise Edition - we’ve enhanced the previously released big data features that allow you to capture massive amounts of event monitoring data; these high-volume streaming data components are the same as those used by companies like Facebook, Netflix, and Twitter.
  • Issue Resolutions - we’ve resolved issues for Standard and Enterprise Editions across CX Monitor.

 

To access and install the latest version of CX Monitor today, visit the product library in the Aurea Support Portal.

 

Log in to the Aurea Support Portal Resource Center for all product documentation and release notes.

We are excited to announce the new release of Aurea CX Messenger - formerly Sonic - release 2017 R4. Available now, CX Messenger 2017 R4 delivers improvements and resolves issues for both Standard and Enterprise Editions:

 

  • Storing Discardable Messages - we’ve made changes to allow discardable messages to be stored in the topic database when flow to disk occurs.
  • Analytics Offloader Maven Archetype - we’ve created a new Analytics Offloader Maven Archetype to enable easy creation of the offloader plugin with Maven.
  • Issue Resolutions - we’ve resolved issues for Standard and Enterprise Editions across CX Messenger.

 

To access and install the latest version of CX Messenger today, visit the product library in the Aurea Support Portal.

 

Log in to the Aurea Support Portal Resource Center for all product documentation and release notes.

We are excited to announce the new release of Aurea CX Process - formerly Savvion - release 2017 R4. Available now, CX Process 2017 R4 delivers improvements and resolves issues for both Standard and Enterprise Editions:

 

  • Additional Supported Platforms, for Standard and Enterprise Editions - we’ve added support for several additional browsers, operating systems, and databases.
  • Issue Resolutions - we’ve resolved issues for Standard and Enterprise Editions across various CX Process modules, including BPM, Archiver, BizLogic, BizPulse, BizSolo, and more.

 

To access and install the latest version of CX Process today, visit the product library in the Aurea Support Portal.

 

Log in to the Aurea Support Portal Resource Center for all product documentation and release notes.

Welcome to AureaWorks, we are so excited to have you!

 

Let me introduce myself, I'm the Senior Community Manager here along with Sarah O'Meara. If you have any questions about how this place works and where to find things, feel free to ask.

 

We'd love it if you could introduce yourselves in the comments below. Feel free to share:

 

  • Your name and company (optional)
  • Your favorite thing about the Aurea products you use
  • What you hope to get out of this community

 

We look forward to getting to know you!

I’d like to take a moment to introduce Aurea customers to our new CX Platform community on AureaWorks, our Customer Engagement Community powered by our latest acquisition, Jive.

 

This is a community for customers that are leveraging CX Messenger (formerly Sonic), CX Monitor (formerly Actional), CX Process (formerly Savvion), and other platform products such as DXSI and Intermediary. It’s also a place for all of Aurea’s customers to learn and understand how they can leverage our world-class CX (Customer Experience) Platform products to enable better customer experiences in their business.

 

We've created this community as a place where Aurea can exchange information with our customers, collect feedback, build and share a comprehensive knowledgebase on the products, and connect you with experts across our organization and the customer base at large. 

 

I’m excited about the future of our CX Platform and encourage all of you to leverage this place as a means of learning and providing input and feedback to tailor this community to meet your needs.  It will also become a way to connect with your peers and to learn how to share information on the best ways to achieve success.  I look forward to robust conversations and the opportunity to extend our reach to a broader audience.

 

Welcome to the AureaWorks community!

 

Curt

Make sure to follow the Customer Experience Platform (CXP) Community and check back in for community updates! Not sure how to follow? Find out here: How to Follow Places in AureaWorks. Prefer to get updates in your email inbox? Find out how to change your email notifications here: Set-Up Email Notifications and Preferences.