An enterprise-ready, low-code, Git-based DevOps toolchain that enables all developers to connect, transform, and modernize mainframe applications with CI/CD automation, integrated development environments (IDEs), AI/ML-based application performance monitoring and visualization dashboards, and automated testing.
N/A
LoadRunner Professional
Score 8.6 out of 10
N/A
A solution that simplifies performance load testing for colocated teams, with project-based capabilities so teams can quickly identify abnormal application behavior.
I'm not sure what scenario it would not be suitable for. I have it up all day, though I do use the mainframe emulator to go back to the old 'green screen'. We have tools that we must use there. There is very little I can't do on the workbench. I'm trying to get some of our new developers to use it, as when I'm using it and talking to them on the phone, they don't know what I'm talking about.
Micro Focus LoadRunner and its suite of tools, specifically VuGen, work wonderfully for us for all web, HTTP/HTTPS, and web service calls. We've been able to build tests for nearly any scenario we need with relative ease. As long as we have crafted requirements for our scenarios/scripts to manage scope, we've had high success with scripting and data driving. Our main tests are web service calls, typically chained together to form a full scenario, with transactions measuring the journey, or a similar (measure along the way) journey through a browser. For web services we use VuGen, and for browser tests we've shifted to TruClient. I have had little to no experience scripting against a thick client where a UI-driven test would be required. I know it's possible, but it's quite costly due to the need to run the actual desktop client to drive tests. We've been fortunate enough to leverage HTTP calls to represent client traffic.
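The chained web-service style the review describes can be sketched as a VuGen (web/HTTP) action. This is an illustrative sketch only, not the reviewer's actual script: the URLs, parameter names, and response boundaries are placeholder assumptions, and it runs only inside the LoadRunner runtime.

```c
/*
 * Hypothetical VuGen action chaining two web-service calls into one
 * scenario, with transactions timing each step of the journey.
 * URLs, boundaries, and payloads are placeholders.
 */
Action()
{
    /* Capture a session token from the first response body. */
    web_reg_save_param("authToken", "LB=\"token\":\"", "RB=\"", LAST);

    lr_start_transaction("01_Login");
    web_custom_request("Login",
        "URL=https://example.com/api/login",
        "Method=POST",
        "EncType=application/json",
        "Body={\"user\":\"{pUser}\",\"pass\":\"{pPass}\"}",
        LAST);
    lr_end_transaction("01_Login", LR_AUTO);

    /* Reuse the captured token in the next chained call. */
    lr_start_transaction("02_GetOrders");
    web_custom_request("GetOrders",
        "URL=https://example.com/api/orders?token={authToken}",
        "Method=GET",
        LAST);
    lr_end_transaction("02_GetOrders", LR_AUTO);

    return 0;
}
```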
We use CA Librarian here, and the workbench doesn't play well with that. I can't view the library my source is in; I have to copy it to a PDS.
I frequently make copies of files on my own, one file at a time. The ability to create a macro to do all of them at once would be nice. Currently, I click the file, click 'copy to', and then fill out the info for each file.
New HP LoadRunner patches and releases sometimes drop support for older versions of various protocols, like Citrix, which makes the task time-consuming when older versions of LoadRunner must be used in some cases. So upgrades should continue to support older protocol versions as well.
Configuring HP LoadRunner over the firewall involves lots of configuration and can be troublesome. There should be a script (a PowerShell script for Windows or a shell script for Linux users) to make it easier and less painful.
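A minimal sketch, assuming a Linux injector with bash and coreutils, of the kind of pre-flight helper the reviewer is asking for. It only checks TCP reachability of a listener host on the expected ports (the real LoadRunner over-firewall setup involves far more than this); the host names and port numbers are placeholders to substitute from your own deployment.

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check: before an over-firewall LoadRunner run,
# confirm the injector can reach the listener host on the expected ports.
# Hosts and ports are placeholders -- use your deployment's real values.

check_port() {
  # Prints OPEN and returns 0 if a TCP connection succeeds within 3s;
  # otherwise prints CLOSED and returns nonzero.
  local host="$1" port="$2"
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OPEN ${host}:${port}"
  else
    echo "CLOSED ${host}:${port}"
    return 1
  fi
}

# When invoked with a host and one or more ports, probe each one;
# with no arguments, the function can simply be sourced for reuse.
if [ "$#" -ge 2 ]; then
  host="$1"; shift
  for port in "$@"; do
    check_port "$host" "$port"
  done
fi
```

Usage would look like `bash check_ports.sh listener-host 443` (the port being whatever your listener is actually configured on).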
I would like the VuGen Runtime Viewer in HP LoadRunner to render based on the browser I selected in the run-time configuration, to make it feel more realistic, like a real user.
Licensing cost is very high when we need to perform a test on an application for a specific group of users.
Support has been amazing compared to Optim. Further, new features arrive regularly with File-AID - I can't remember the last time Optim had a significant update. File-AID support is very receptive to feature requests and reported bugs, including sending out hotfixes quickly.
Customer service is not that great. It's difficult to get hold of someone when an issue needs to be addressed urgently. No online chat service is readily available.
Optim is more user-friendly in how it operates, in my opinion. It's less obtuse to figure out how to extract and mask the required data compared to File-AID. Further, Optim is far easier for gathering related tables. I do prefer using File-AID via the Topaz GUI much more than using Optim via its GUI. Finally, I personally believe that File-AID is significantly faster to run than Optim - this could be a configuration issue.
I can debug (expeditor) much faster and more efficiently. In fact, I was asked yesterday to run their job through Workbench Expeditor. I can also view data movement much better.
Code analysis lets me give a quicker explanation of what a program may do, as it provides a graphical interface showing processing and data movement.
The scripts created with the traditional web/HTTP protocol are not robust, so re-scripting is required after almost every code drop. Troubleshooting and fixing the issue takes more time, so in most cases we re-script to keep it simple and save time.
In an ideal world you would rather spend more time testing than scripting; in that case you could mostly use the Ajax TruClient protocol. This type of script will only fail when an object in the application is removed or changed completely. This way of scripting will save you more time and helps you maintain the scripts with less rework effort on a release basis. In the long run you will have a better ROI when you use the Ajax TruClient protocol for scripting.