Most modern enterprise software runs on Java. At the heart of many businesses sit ERP and commerce platforms built on Java that manage orders, track inventory, run fulfillment, and integrate with external systems like eCommerce storefronts and shipping carriers.
Apache OFBiz is one such platform. It is an open-source enterprise framework that powers order management, inventory control, warehouse operations, and fulfillment. In a typical production deployment, where Apache OFBiz serves as both an OMS and a WMS, it is doing many things at the same time: downloading orders from eCommerce and other sales channels, validating them through business rules, syncing inventory across dozens of facilities, processing shipments in batches, and responding to API calls and webhooks. These tasks run continuously, often in parallel, throughout the day.
In small environments, all of this works smoothly. But in large-scale production systems, data grows every day; multiple services compete for the same memory, database connections, and threads; and if the application code is not written carefully, the system can run out of memory and crash.
This article explains what out-of-memory means, why it happens in Apache OFBiz, which everyday workflows are most likely to trigger it, and what patterns and practices prevent it.
What Is a System Out-of-Memory Condition?
When a Java-based application like Apache OFBiz starts, the JVM (Java Virtual Machine) is given a fixed amount of memory called the heap. Think of the heap as a workspace. Every object the application creates during its work, whether it is an order record, an inventory update, or the contents of a file, takes up space in this workspace.
Java has a built-in cleanup process called the garbage collector. It regularly scans for objects that are no longer being used and frees their memory. This is how the system stays healthy: objects are created, used, and then cleaned up automatically.
When Does the System Run Out?
A system out-of-memory condition occurs when the application needs more resources than are available. This happens in two ways:
- Heap exhaustion – Too many objects are alive in memory at the same time. The garbage collector cannot free enough space because the objects are still being used. New requests fail because there is no room for them.
- Resource exhaustion – The application has used up critical operating system resources like database connections or file handles, and has not released them. Even if the heap has space, the system cannot function.
In Apache OFBiz, both lead to the same outcome: the Service Engine stops processing jobs, the Entity Engine cannot run database queries, scheduled services stop firing, and users see a frozen or crashed system.
Heap Exhaustion vs. Resource Exhaustion
This is an important distinction. Out-of-memory does not always mean the heap is full. Sometimes the system crashes because database connections or file handles have leaked, not because of memory. The heap might still have space, but the application is stuck because it cannot talk to the database or open files.
Both conditions are equally dangerous in production. Heap exhaustion is more visible because the JVM throws an explicit error. Resource exhaustion is harder to diagnose because the system simply stops responding without a clear error message.
What Causes Out-of-Memory in Apache OFBiz?
Out-of-memory problems almost never show up during development. A developer working locally has a small database, no background jobs running, and no concurrent users. Everything works fine.
The problems appear in production, where transactional data has been growing for months, multiple services run in parallel, and integrations push data continuously.
Fetching Too Much Data at Once
This is the most common cause. It happens when a service queries the database and loads all the results into memory at the same time.
Example: Inventory Sync to an eCommerce Platform

Suppose your system has 1 million inventory records that need to be sent to an eCommerce platform. If your service uses delegator.findList() to fetch all 1 million records at once, all of them sit in memory as GenericValue objects until processing finishes. If your heap is 2 GB, those records alone might consume most of it. If two or three similar services run at the same time, the system runs out of memory and crashes.
The same code works perfectly in development where there are only 50 records. The logic is correct. The output is correct. But at production scale, it breaks.
Reading or Writing Large Files in One Go
Apache OFBiz implementations often deal with CSV and JSON files for imports, exports, and integrations. These files tend to grow over time as catalogs expand and data accumulates.
When a service reads an entire file into memory at once, all that content sits in the heap. If the service also uses a BufferedWriter to prepare output, the same data ends up in memory twice: once as the source and once in the write buffer. That doubles the memory usage.
Heavy Database Queries
The Apache OFBiz Entity Engine makes it easy to query the database. But easy does not always mean efficient. Queries that join too many tables, fetch columns that are not needed, or lack proper filter conditions pull more data than necessary.
A query that runs fine against 10,000 orders will behave very differently against 500,000 orders. The database works harder, and the application uses more memory to hold all the results.
Not Releasing Resources Properly
When your Apache OFBiz service reads a file, opens a database connection, or calls an external API, it borrows a resource from the operating system. When the work is done, that resource must be returned. If it is not, the resource stays locked. The operating system thinks the application is still using it. The application has forgotten about it. Nobody can use it.
Example: Database Connection Leak

Suppose your Apache OFBiz system has 40 database connections configured. A service opens a connection but does not close it properly. Under heavy load, the garbage collector cleans up the Java object, but the database connection stays open. The database thinks Apache OFBiz is still using it. Apache OFBiz does not know the connection exists. Do this 40 times, and all connections are locked. The database refuses new connections. Apache OFBiz cannot do anything, even though the heap has plenty of memory.
This is why Java introduced the try-with-resources pattern. When you open a resource inside a try-with-resources block, Java guarantees it will be closed when the block finishes, even if something goes wrong.
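As a minimal, self-contained sketch of the pattern (the file contents and class name here are illustrative, not OFBiz APIs):

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.nio.file.Files;
import java.nio.file.Path;

public class TryWithResourcesDemo {

    // Reads the first line of a file. The reader is closed automatically
    // when the try block exits, even if readLine() throws an exception.
    static String readFirstLine(Path file) throws Exception {
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            return reader.readLine();
        } // reader.close() is guaranteed to run here
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".txt");
        // The writer is also a resource: closed (and flushed) automatically.
        try (BufferedWriter writer = Files.newBufferedWriter(tmp)) {
            writer.write("hello");
        }
        System.out.println(readFirstLine(tmp)); // prints "hello"
        Files.delete(tmp);
    }
}
```

The same shape applies to JDBC connections, sockets, and streams: anything that implements AutoCloseable can go in the parentheses of the try block.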
Too Many Asynchronous Services at Once
Apache OFBiz's Service Engine lets you run services asynchronously, meaning they run in the background without making the caller wait. This is useful for tasks like sending emails or pushing notifications.
But each async service uses a thread, memory, and processing power. If you trigger thousands of async services at once, for example one notification per order during a bulk import of 10,000 orders, the Service Engine gets overwhelmed. The job queue fills up. Threads are all busy. CPU and memory spike. The system slows down for everyone.
Apache OFBiz Is Not a Message Queue

Systems like Amazon SQS or Google Pub/Sub are designed to absorb millions of messages. Apache OFBiz is not. When external systems send thousands of webhooks or API calls at the same time, Apache OFBiz tries to process each one immediately, creating a thread for each request until resources run out. For high-volume integrations, use an external queue to absorb the traffic, and let Apache OFBiz read from the queue at its own pace.
Understanding What Grows and What Stays Static
One of the most important things an Apache OFBiz developer needs to understand is which entities grow over time and which ones stay small. This tells you whether you need an iterator or can safely load all records at once.
| Grows Over Time → Use Iterator | Stays Small → Safe to Load All |
| --- | --- |
| OrderHeader, OrderItem | Enumeration, EnumerationType |
| Shipment, ShipmentItem | StatusItem, StatusType |
| InventoryItem, InventoryItemDetail | ProductType, ProductCategory |
| Product (in large catalogs) | Facility, FacilityType |
| OrderPaymentPreference | OrderType, SalesChannel |
| ShipmentRouteSegment | Carrier, ShipmentMethodType |
The rule is simple: if more records get added as the business operates (new orders every day, new shipments, new inventory movements), use an iterator. If the data is set up once and rarely changes (like facility types or status definitions), you can load it all.
Think of It This Way

An OrderType entity might have 5 records: Sales Order, Purchase Order, etc. It will probably never have more than 10. Loading all of them is fine. But an OrderHeader entity could have 50 records today and 500,000 records a year from now. Loading all of them at once will eventually crash the system.
What Does the Apache OFBiz Framework Offer to Address These Issues?
Apache OFBiz provides specific tools and patterns to handle each of the conditions described above. These are not theoretical suggestions. They are the patterns that production-scale OFBiz deployments rely on daily.
Use EntityListIterator for Large Data
Instead of delegator.findList(), which loads everything into memory at once, use EntityListIterator. The iterator fetches records one at a time or in small groups. After you process a record, the garbage collector can reclaim its memory.
Even if your query matches 1 million records, only a few are in memory at any given time. The rest stay in the database, waiting to be fetched when needed.
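In OFBiz code this typically looks like the following sketch. The entity name "InventoryItem" and the facilityId condition are illustrative; EntityQuery, EntityListIterator, and GenericValue are framework classes. In recent OFBiz releases EntityListIterator is AutoCloseable, so it can live in a try-with-resources block; older releases need an explicit close() in a finally block.

```java
// Illustrative sketch, not a complete service. Assumes an OFBiz
// delegator and a facilityId variable are in scope.
try (EntityListIterator eli = EntityQuery.use(delegator)
        .from("InventoryItem")
        .where("facilityId", facilityId)
        .queryIterator()) {
    GenericValue item;
    while ((item = eli.next()) != null) {
        // Process one record at a time. After this iteration, the
        // GenericValue becomes eligible for garbage collection.
    }
}
```

Closing the iterator matters as much as using it: an unclosed EntityListIterator holds a database cursor open, which is exactly the resource-exhaustion pattern described earlier.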
When to Use an Iterator

Use an iterator whenever you work with entities that grow over time: orders, shipments, inventory records, order items. You do not need an iterator for reference data like enumerations, status types, or facility types, because these have a small, fixed number of records.
Read and Write Files Using Streams
Instead of loading an entire CSV or JSON file into memory, read it line by line using a BufferedReader or similar stream-based approach. For writing, collect a batch of records in memory (say 1,000 rows), write them to the file, then continue with the next batch.
This way, the full file is never in memory at once. Your service can handle files of any size without running out of memory.
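A self-contained sketch of the batched approach (the class name and batch size are illustrative): the copy holds at most one batch of rows in memory, regardless of how large the input file is.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class BatchedCsvCopy {

    // Copies a text file line by line, writing out every batchSize rows,
    // so at most batchSize lines are held in memory at any moment.
    // Returns the total number of rows written.
    static long copyInBatches(Path in, Path out, int batchSize) throws Exception {
        long total = 0;
        List<String> batch = new ArrayList<>(batchSize);
        try (BufferedReader reader = Files.newBufferedReader(in);
             BufferedWriter writer = Files.newBufferedWriter(out)) {
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(line);
                if (batch.size() == batchSize) {
                    for (String row : batch) {
                        writer.write(row);
                        writer.newLine();
                    }
                    total += batch.size();
                    batch.clear(); // free the processed rows for the GC
                }
            }
            // Write the final, partially filled batch.
            for (String row : batch) {
                writer.write(row);
                writer.newLine();
            }
            total += batch.size();
        }
        return total;
    }
}
```

The try-with-resources block also guarantees both file handles are released, so this example combines the streaming and resource-cleanup patterns in one place.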
Always Close Resources with Try-With-Resources
Whenever your service opens a file, creates an IO stream, or obtains a connection outside the Entity Engine's normal flow, wrap it in a try-with-resources block. This guarantees the resource is released when the work is done, preventing connection leaks and file handle exhaustion.
Use Scheduled Batch Services Instead of Mass Async
Instead of firing an async service for every single record, create a scheduled service that runs at regular intervals (say, every 5 minutes). This service fetches eligible records using an iterator and processes them one by one or in small batches.
This gives you steady, predictable resource usage instead of sudden spikes that overwhelm the system.
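As a sketch, such a batch service is declared once in a service definition file and then scheduled through the OFBiz Job Scheduler. The service name, Java class, and method below are hypothetical examples, not part of stock OFBiz:

```xml
<!-- Hypothetical service definition; name, location, and invoke are examples. -->
<service name="processPendingOrderNotifications" engine="java"
         location="com.example.ofbiz.NotificationServices"
         invoke="processPendingOrderNotifications"
         use-transaction="true">
    <description>Processes pending order notifications in small batches.</description>
</service>
```

The service implementation would fetch eligible records with an EntityListIterator and process a bounded batch per run; the recurrence (for example, every 5 minutes) is configured through the Job Scheduler rather than hard-coded in the service.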
Use External Queues for High-Volume Integrations
For integrations that receive a high volume of incoming requests (like Shopify order webhooks during a sale), use an external queue such as Amazon SQS or EventBridge. The external system pushes messages to the queue, and your Apache OFBiz service reads from the queue at a controlled pace.
This protects OFBiz from being overwhelmed by traffic it was not designed to absorb directly.
Cache the Right Data
Apache OFBiz has a built-in entity cache that stores frequently accessed data in memory so the database does not need to be queried every time. But caching must be applied selectively:
- Safe to cache – Product names, facility details, enumerations, status types, order types. These change rarely and caching them saves database load without any risk.
- Never cache – Orders, shipments, inventory counts, or any data that changes frequently and affects business decisions.
Why? Because if you read an order from cache, it might be stale. Two items could have been cancelled since the cache was last updated. If your fulfillment service reads the cached version, it could ship items that should not have been shipped. That is a business error, not just a technical one.
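In OFBiz code, opting into the entity cache for a read is a one-line decision. The sketch below uses framework classes (EntityQuery, delegator); the entity and ID are illustrative:

```java
// Illustrative sketch: reading small, rarely-changing reference data
// through the entity cache. Safe for OrderType; never do this for
// OrderHeader, Shipment, or InventoryItem reads that drive decisions.
GenericValue orderType = EntityQuery.use(delegator)
        .from("OrderType")
        .where("orderTypeId", "SALES_ORDER")
        .cache()       // serve from the entity cache when possible
        .queryOne();
```

Omitting the .cache() call gives you a fresh read from the database, which is the right default for transactional data.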
Designing for Production Scale
The biggest trap for developers is building services that work perfectly in development but fail in production. The code is correct, the logic is sound, but the assumptions about data volume are wrong.
Key Considerations for Production
- Data grows every day. Orders, shipments, and inventory records accumulate continuously.
- Multiple services run at the same time: scheduled jobs, integrations, user requests, and batch processes.
- All of these compete for the same heap memory, database connections, and threads.
Every time you write a service that reads from the database, ask yourself: "What if this returns half a million records?" If the answer makes you uncomfortable, use an iterator.
Infrastructure Tips
- Set the JVM heap size based on real production workloads, not development defaults.
- Separate heavy batch jobs from real-time application processing when possible.
- Monitor heap usage, thread counts, and database connection pool usage in production.
- Configure the Apache OFBiz Service Engine thread pool based on actual load, not guesses.
Conclusion
The JVM does not run out of memory on its own. It runs out of memory because of how the application uses it.
In Apache OFBiz, the tools to prevent this are already available:
- EntityListIterator for processing large datasets without loading everything into memory.
- Try-with-resources for ensuring files, streams, and connections are always closed properly.
- Entity cache for reference data that rarely changes, so the database is not queried repeatedly.
- Scheduled batch services instead of mass async execution, for steady and predictable resource usage.
- External queues like Amazon SQS for absorbing high-volume integration traffic.
These are not advanced optimizations. They are foundational patterns that should be part of every Apache OFBiz service from the start. Write your code as if it will run against production data from day one, because eventually, it will.

