Posts Tagged ‘JMS’

Problem: Spring JMS MessageListener Stuck / Not Receiving Messages

Scenario

A Spring Boot application using ActiveMQ with @JmsListener suddenly stops receiving messages after running for a while. No errors appear in the logs and the queue keeps growing, yet the consumers seem idle.

Setup

@JmsListener(destination = "myQueue", concurrency = "5-10")
public void processMessage(String message) {
    log.info("Received: {}", message);
}
  • ActiveMQConnectionFactory was used.

  • The queue (myQueue) was filling up.

  • Restarting the app temporarily fixed the issue.


Investigation

  1. Checked ActiveMQ Monitoring (Web Console)

    • Messages were enqueued but not dequeued.

    • Consumers were still active, but not processing.

  2. Thread Dump Analysis

    • Found that listener threads were stuck in a waiting state.

    • The problem only occurred under high load.

  3. Checked JMS Acknowledgment Mode

    • Default AUTO_ACKNOWLEDGE was used.

    • Suspected an issue with message acknowledgment.

  4. Enabled Debug Logging

    • Added:

      logging.level.org.springframework.jms=DEBUG
    • Found repeated logs like:

      JmsListenerEndpointContainer#0-1 received message, but no further processing
    • This hinted at connection issues.

  5. Tested with a Different Message Broker

    • Using Artemis JMS instead of ActiveMQ resolved the issue.

    • Indicated that it was broker-specific.


Root Cause

ActiveMQ’s TCP connection was silently dropped, but the JMS client did not detect it.

  • When the connection is lost, DefaultMessageListenerContainer doesn’t always recover properly.

  • ActiveMQ does not always notify clients of broken connections.

  • No exceptions were thrown because the connection was technically “alive” but non-functional.


Fix

  1. Enabled keepAlive in ActiveMQ connection

    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory();
    factory.setUseKeepAlive(true);
    factory.setOptimizeAcknowledge(true);
    return factory;
  2. Forced Reconnection with Exception Listener

    • Implemented:

      factory.setExceptionListener(exception -> {
          log.error("JMS Exception occurred, reconnecting...", exception);
          restartJmsListener();
      });
    • This ensured that if a connection was dropped, the listener restarted.
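    • The post leaves restartJmsListener() undefined. A minimal sketch, assuming Spring’s JmsListenerEndpointRegistry (the registry behind @JmsListener) is injected, could stop and restart every registered container:

      @Autowired
      private JmsListenerEndpointRegistry registry;

      public void restartJmsListener() {
          // Stopping and starting each container tears down and re-creates
          // the underlying JMS consumers against the broker.
          registry.getListenerContainers().forEach(container -> {
              container.stop();
              container.start();
          });
      }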

  3. Switched to DefaultJmsListenerContainerFactory with DMLC

    • SimpleMessageListenerContainer was less reliable in handling reconnections.

    • New Configuration:

      @Bean
      public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(
              ConnectionFactory connectionFactory) {
          DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
          factory.setConnectionFactory(connectionFactory);
          factory.setSessionTransacted(true);
          factory.setErrorHandler(t -> log.error("JMS Listener error", t));
          return factory;
      }

Final Outcome

✅ After applying these fixes, the issue never recurred.
🚀 The app remained stable even under high load.


Key Takeaways

  • Silent disconnections in ActiveMQ can cause message listeners to hang.

  • Enable keepAlive and optimizeAcknowledge for reliable connections.

  • Use DefaultJmsListenerContainerFactory with DMLC instead of SMLC.

  • Implement an ExceptionListener to restart the JMS connection if necessary.

 

When WebLogic always routes on the same node of the cluster…

Case

Over the past couple of days I have run into the following issue on my WebLogic server: an application is deployed on a cluster referencing two nodes, and load balancing (in round-robin) is enabled for JMS dispatching.

  • Yet, all JMS messages are received by only one node (let’s say “the first”); none is received by the other (let’s say “the second”).
  • When the 1st node goes down, the 2nd receives the messages.
  • When the 1st node is started up again, the 2nd keeps on receiving the messages.
  • When the 2nd node goes down, the 1st receives the messages,
  • and so on.

Fix

In the WebLogic console, go to JMS Modules. In the table of resources, select the connection factory, then open the Configuration > Load Balance tab and uncheck “Server Affinity Enabled”.

Now it should work.

Many thanks to Jeffrey A. West for his help via Twitter.

BEA / JMSExceptions 045101

Case

I used a RuntimeTest to send a JMS message to a WebLogic application, with the queues hosted natively in WebLogic (distributed queues, to be more accurate). The test passed.
I then clustered the application. When I execute the same test, I get the following error:

[JMSExceptions:045101]The destination name passed to createTopic or createQueue "JONATHAN_LALOU_JMS_DISTRIBUTED_QUEUE" is invalid. If the destination name does not contain a "/" character then it must be the name of a distributed destination that is available in the cluster to which the client is attached. If it does contain a "/" character then the string before the "/" must be the name of a JMSServer or a ".". The string after the "/" is the name of a the desired destination. If the "./" version of the string is used then any destination with the given name on the local WLS server will be returned.

Fix

Since the error message is rather explicit, I tried adding a slash ('/'), a dot ('.'), or both ('./'), but none worked.
To fix the issue, prefix the queue name with the JMS module name and an exclamation mark ('!') in the RuntimeTest configuration file, e.g. replace:

<property name="defaultDestinationName" value="JONATHAN_LALOU_JMS_DISTRIBUTED_QUEUE"/>

with:

<property name="defaultDestinationName" value="JmsWeblogicNatureModule!JONATHAN_LALOU_JMS_DISTRIBUTED_QUEUE"/>

Mule / MQJMS3000: failed to create a temporary queue from SYSTEM.DEFAULT.MODEL.QUEUE

Case

I have a Mule workflow whose outbound endpoint is a <jms:outbound-endpoint>. The destination queue is hosted on MQ Series and accessed through a WebLogic 10.3.3 bridge.

I get the following error:

MQJMS3000: failed to create a temporary queue from SYSTEM.DEFAULT.MODEL.QUEUE

Complete Stacktrace

2010-11-03 13:03:11,421 ERROR mule.DefaultExceptionStrategy       - Caught exception in Exception Strategy: MQJMS3000: failed to create a temporary queue from SYSTEM.DEFAULT.MODEL.QUEUE
javax.jms.JMSException: MQJMS3000: failed to create a temporary queue from SYSTEM.DEFAULT.MODEL.QUEUE
 at com.ibm.mq.jms.services.ConfigEnvironment.newException(ConfigEnvironment.java:644)
 at com.ibm.mq.jms.MQConnection.createTemporaryQueue(MQConnection.java:2958)
 at com.ibm.mq.jms.MQSession.createTemporaryQueue(MQSession.java:4650)
 at com.ibm.mq.jms.MQQueueSession.createTemporaryQueue(MQQueueSession.java:286)
 at org.mule.transport.jms.Jms11Support.createTemporaryDestination(Jms11Support.java:247)
 at org.mule.transport.jms.JmsMessageDispatcher.getReplyToDestination(JmsMessageDispatcher.java:483)
 at org.mule.transport.jms.JmsMessageDispatcher.dispatchMessage(JmsMessageDispatcher.java:171)
 at org.mule.transport.jms.JmsMessageDispatcher.doDispatch(JmsMessageDispatcher.java:73)
 at org.mule.transport.AbstractMessageDispatcher$Worker.run(AbstractMessageDispatcher.java:262)
 at org.mule.work.WorkerContext.run(WorkerContext.java:310)
 at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1061)
 at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:575)
 at java.lang.Thread.run(Thread.java:619)

Explanation

A similar issue is described on the Mule support forum, where Richard Swart wrote:

This not really mule specific error but an MQ authorization error. The QueueSession.createTemporaryQueue method needs access to the model queue that is defined in the QueueConnectionFactory temporaryModel field (by default this is SYSTEM.DEFAULT.MODEL.QUEUE).

Quick Fix

To fix the issue, on the MQ server side, grant client applications access to the default SYSTEM.DEFAULT.MODEL.QUEUE.
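For instance, with IBM MQ’s setmqaut tool (the queue manager name QM1 and principal appuser below are placeholders; grant only the authorities your security policy allows):

      setmqaut -m QM1 -t queue -n SYSTEM.DEFAULT.MODEL.QUEUE -p appuser +put +get +dsp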

Tutorial: from an application, make a clustered application, within WebLogic 10

Abstract

You have a non-clustered installation, on the host with DNS name jonathanDevDesktop, with an admin (port: 7001), a muletier (port: 7003) and a webtier (port: 7005) instance.
You need to turn your muletier into a clustered installation, with two nodes, on the same server. The second node will be deployed on port 7007.

We assume you have a configured JMS module (in our case: JmsMqModule, even though the bridge between WebLogic and MQ has no impact here).

Process

Batches

  • Copy $DOMAINS\jonathanApplication\start-muletier-server.bat as $DOMAINS\jonathanApplication\start-muletier-server-2.bat
  • Edit it:
    • If needed, modify the debug port (usually: 5006)
    • Replace the line
      call "%DOMAIN_HOME%\bin\startManagedWebLogic.cmd" muletier t3://jonathanDevDesktop:7001

      with

      call "%DOMAIN_HOME%\bin\startManagedWebLogic.cmd" muletier2 t3://jonathanDevDesktop:7001

Second Node Creation

  • The following two steps are optional:
    • Copy the folder %DOMAIN_HOME%\servers\muletier as %DOMAIN_HOME%\servers\muletier2
    • Delete the folders %DOMAIN_HOME%\servers\muletier2\cache and %DOMAIN_HOME%\servers\muletier2\logs
  • Stop the server muletier
  • On WebLogic console:
    • Servers > New > Server Name: muletier2, Server Listen Port: 7007 > Check Yes, create a new cluster for this server. > Next
    • Name: jonathanApplication.cluster.muletier > Messaging Mode: Multicast, Multicast Address: 239.235.0.4, Multicast Port:5777
    • Clusters > jonathanApplication.cluster.muletier > Configuration > Servers > Select a server: muletier
    • Clusters > jonathanApplication.cluster.muletier > Configuration > Servers > Select a server: muletier2
  • Start the instances of muletier and muletier2 in MS-DOS consoles.
  • On the WebLogic console:
    • Deployments > jonathanApplication-web (the mule instance) > Targets > check “jonathanApplication.cluster.muletier” and “All servers in the cluster” > Save
  • On the muletier2 DOS console, you can see the application is deployed.

JMS Configuration

The deployment of JMS on clustered environment is a little tricky.

  • On the WebLogic console: JMS Modules > JmsMqModule > Targets > check “jonathanApplication.cluster.muletier” and “All servers in the cluster”
  • Even though it is not required, restart your muletiers. You can then send messages on either port 7003 or 7007; they will be popped and handled the same way.