Data Center Real-User Monitoring (DCRUM), discontinued
Formerly Dynatrace Network Application Monitoring (NAM)
Overview
What is Data Center Real-User Monitoring (DCRUM), discontinued?
Data Center Real-User Monitoring (DCRUM), also known as Dynatrace Network Application Monitoring (NAM), was an application monitoring solution focused on user experience, with an emphasis on how the network, especially the WAN, influences user experience. It is a legacy product from Dynatrace.
DCRUM, legacy and traditional datacenter monitoring for enterprise applications.
DCRUM - Application insight into the darkest corners of your DC.
Dynatrace Data Center Real-User Monitoring (DCRUM) enables quick insight into the status and performance of all transaction-based applications.
Pricing
Entry-level set up fee?
- No setup fee
Offerings
- Free Trial
- Free/Freemium Version
- Premium Consulting/Integration Services
Alternatives Pricing
What is Splunk Real User Monitoring (RUM)?
Splunk Real User Monitoring (RUM) enables monitoring of any stack: on-prem, hybrid, and multicloud.
What is ScienceLogic SL1?
ScienceLogic SL1 is a system and application monitoring and performance management platform. ScienceLogic collects and aggregates data across IT ecosystems and contextualizes it for actionable insights with the SL1 product offering.
Product Details
- About
- Tech Details
Data Center Real-User Monitoring (DCRUM), discontinued Technical Details
| Operating Systems | Unspecified |
| --- | --- |
| Mobile Application | No |
Reviews and Ratings (27)
Attribute Ratings
Reviews (1-3 of 3)
Much better than it used to be, but still in need of improvements
- Dynatrace Network Application Monitoring (NAM), formerly DCRUM, is very useful for tracing network packet flow across applications
- It shows the application behavior in terms of end-user performance
- It integrates very well with other Dynatrace components
- Dynatrace as a whole has a lot to improve in network management
- While it provides a lot of data, it is not always comprehensive; it still needs some manual digging into the issues
- Application monitoring: 7.0/10
- Database monitoring: 6.0/10
- Threshold alerts: 7.0/10
- Predictive capabilities: 8.0/10
- Application performance management console: 8.0/10
- Collaboration tools: 8.0/10
- Out-of-the-box templates to monitor applications: 7.0/10
- Application dependency mapping and thresholding: 8.0/10
- Virtualization monitoring: 6.0/10
- Server availability and performance monitoring: 7.0/10
- Server usage monitoring and capacity forecasting: 7.0/10
- IT Asset Discovery: 7.0/10
- Dynatrace Network Application Monitoring (NAM), formerly DCRUM, helped us get a lot of info on an ongoing issue.
- It helped us figure out network bottlenecks in our environment.
- It is a very costly tool, and a lot of other, cheaper tools give the same kind of info.
- Dynatrace DCRUM can monitor legacy application protocols that are still in use at many organizations worldwide that continue to trust those technologies.
- DCRUM monitors client-server architectures very well and can pinpoint issues along an infrastructure stack.
- Dynatrace DCRUM can analyze a wide spectrum of protocols: Corba, DNS, DB2, Exchange, TCP, HTTP, IBM MQ, Citrix, ICMP, Informix, Tuxedo, SMB, LDAP, MSRPC, MySQL, NetFlow, Net8, Oracle Forms, RMI, SAP GUI, SAP HANA, SAP RFC, SOAP, XML.
- Its configuration requires a lot of technical and business knowledge to drive monitoring expectations to dashboards.
- DCRUM needs a robust monitoring architecture to store, analyze and visualize all collected data.
- DCRUM can't monitor the latest Microsoft Exchange versions.
- Application monitoring: 10.0/10
- Database monitoring: 10.0/10
- Threshold alerts: 8.0/10
- Application performance management console: 8.0/10
- Collaboration tools: 1.0/10
- Out-of-the-box templates to monitor applications: 9.0/10
- Application dependency mapping and thresholding: 5.0/10
- Virtualization monitoring: 8.0/10
- Server availability and performance monitoring: 9.0/10
Dynatrace Data Center Real-User Monitoring (DCRUM) enables quick insight into the status and performance of all transaction-based applications, and some others too, like Citrix and VoIP.
When things go wrong, a quick glance reveals what part of the application delivery chain isn't performing as it should. Automatic baselining of all traffic also helps separate out deviations from the norm, such as a bad new code release or design flaws in the data flow. Many solutions are covered out of the box, and the ability to support in-house protocols for legacy applications is important.
- The ability to correlate Citrix users and their performance to an actual backend application.
- Full insight into SAP across all its protocols, as well as the customized code, makes DCRUM a nice augmentation to Solution Manager.
- The high capacity to decode 20 Gbps traffic on a single box/probe makes it easy to slot in even in high density populated DCs.
- To help understand and decode unknown traffic, the configuration wizard suggests what to use in the datastream, since hardly any customer has documentation covering the network traffic/protocol.
- It's also very handy to be able to define specific traffic to/from a node as Software Services when they are a shared resource, such as a database, web farm or a DNS. That enables you to cherry pick from as many software services as you want when building the application delivery chain which makes up what a user perceives as "an application".
- High capacity netflow processing helps you to get insight into devices that might be out of reach for various reasons.
- One very convenient thing is the self-monitoring/maintenance of the system. Installations are often left running for a very long time, and there is comfort in knowing that they maintain themselves with data rollup and cleanup tasks, so you won't be met by a blank screen due to a full hard drive or an inability to restart without user intervention.
- The major-upgrade process is sometimes unpredictable.
- The use of SQL Server should be evaluated for something else.
- Easier SSL key handling.
It works really well on everything based on HTTP and the specific decodes it supports. There are guides for configuring them, and they are not "hard," as you can tweak and twist the decode with RegExs.
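As an illustration of the kind of RegEx tweaking described above, a rule that pulls a transaction name out of an HTTP URL path might look like the sketch below. This is a hypothetical pattern in plain Python, not DCRUM's actual decode syntax; the path layout `/app/<module>/<action>` is an assumption for the example.

```python
import re

# Hypothetical decode rule: extract a "transaction name" from a URL path.
# DCRUM's real decode configuration has its own syntax; this only
# illustrates the regex idea of mapping raw URLs to reporting buckets.
TRANSACTION_PATTERN = re.compile(r"^/app/(?P<module>\w+)/(?P<action>\w+)")

def transaction_name(url_path: str) -> str:
    """Map a raw URL path to a reporting-friendly transaction name."""
    match = TRANSACTION_PATTERN.match(url_path)
    if not match:
        return "other"  # unmatched traffic falls into a catch-all bucket
    return f"{match.group('module')}.{match.group('action')}"

print(transaction_name("/app/orders/create?id=42"))  # orders.create
print(transaction_name("/healthcheck"))              # other
```

Grouping URLs this way is what lets per-transaction response times roll up into readable reports instead of one line per unique query string.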
For the other decodes included, the same applies. You also have full freedom to draw reports backwards and forwards as all data is exposed in the DMI (Data Mining Interface).
It also sports a nifty Smartphone interface where you can control what is being published in a role-based setup, making sure your managers only get what they ask for and are spared the techie details.
- Application monitoring: 10.0/10
- Database monitoring: 8.0/10
- Threshold alerts: 10.0/10
- Application performance management console: 10.0/10
- Collaboration tools: 4.0/10
- Out-of-the-box templates to monitor applications: 10.0/10
- Application dependency mapping and thresholding: 9.0/10
- Virtualization monitoring: 9.0/10
- Server availability and performance monitoring: 10.0/10
- IT Asset Discovery: 3.0/10
- It has reduced the number of war rooms as well as the number of people involved to address issues.
- It helps in utilization trending for network capacity.
- It has prevented poor solutions from hitting production.
- When the various business units launch their own initiatives, such as third-party tools or new platforms, it has become extremely easy to detect.
- The reports help with continuous improvement, as any change is immediately discovered and can be compared to history. Deviations from what you have defined as a tolerance corridor can be used to trigger alarms, both positive and negative.
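The tolerance-corridor idea above can be sketched in a few lines: compare new measurements against a rolling baseline and flag excursions in either direction. This is an illustrative baseline-deviation check, not DCRUM's actual baselining algorithm, and the `k = 2.0` corridor width is an assumed example value.

```python
from statistics import mean, stdev

def corridor_alarms(history, new_values, k=2.0):
    """Flag values outside baseline ± k standard deviations.

    Returns (value, 'high'|'low') per deviation; both positive and
    negative excursions are reported, mirroring the point that alarms
    can be triggered in both directions.
    """
    baseline, spread = mean(history), stdev(history)
    alarms = []
    for v in new_values:
        if v > baseline + k * spread:
            alarms.append((v, "high"))
        elif v < baseline - k * spread:
            alarms.append((v, "low"))
    return alarms

# Response times in ms: a stable baseline, then a spike and a drop.
history = [100, 102, 98, 101, 99, 100, 103, 97]
print(corridor_alarms(history, [101, 150, 60]))  # [(150, 'high'), (60, 'low')]
```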
Yes and no. Serious bugs can be solved in a few hours with a personal patch while "grey zone" things or things of less impact might take more time.
These things can be how a decode interprets certain content. This has improved in the later releases, though, with the Lua-based decodes, which are far easier to tweak than the older ones.
Several times the Product Managers have offered and taken over the dialogue directly with the stakeholders to understand precisely why they request a particular function or report to cover a specific use case or business need.
Half of the time this leads to something improving in the product, and the other half leads to a more educated stakeholder :-)
- Any kind of network based statistics.
- Any kind of network based discovery.
- Application response time versus network response time.
- Device discovery and baselining.
- Customized reporting.
- The dependency on a SQL database forces you to decide what data to keep for a particular period to prevent DB growth in a bigger network.
- If there is no DAN (Data Acquisition Network) or prior experience using TAPs or other means of getting to the packets, the initial setup can be time-consuming, but this goes for all similar products relying on a copy of the network traffic.
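The retention decision mentioned above is largely arithmetic: how many days of rollup data fit in the database at a given ingest rate. A rough sizing sketch, with made-up numbers and an assumed overhead factor, not DCRUM sizing guidance:

```python
def retention_days(db_capacity_gb: float, daily_ingest_gb: float,
                   overhead_factor: float = 1.3) -> int:
    """Rough estimate of how many days of rollup data fit in the DB.

    overhead_factor pads for indexes and aggregation tables; all
    numbers here are illustrative, not vendor sizing guidance.
    """
    return int(db_capacity_gb / (daily_ingest_gb * overhead_factor))

# e.g. a 500 GB database receiving 4 GB/day of collected metrics
print(retention_days(500, 4))  # 96
```

Running the estimate for each candidate retention period makes the keep-or-drop trade-off explicit before the database fills up.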