My company recently started installing IO modules on production machines to capture data into our database. To do so, we contracted a third-party programmer to create IO pulling software that draws data from the IO modules into a database on a collector PC.
The IO pulling software runs on the collector PC and continues to draw data from the machines into its local database for as long as it is left open. No data comes in if the collector PC is shut down or the software is closed.
The program works well, but there was a bit of an oversight: if a machine gets unplugged or otherwise disconnected from the network (the connection is made via Ethernet cables), the pulling software throws an unhandled exception error for that particular machine.
Clearly it's due to there being no connection, and clicking Continue dismisses the error, but I'm afraid it will affect the quality of the data captured. Also, our contract with the programmer who wrote this has expired, so I'm the one who has to fix it.
The program was written in VB, and I was thinking of adding a procedure to the code to handle this error automatically, or at least to tell the program to select Continue and move on without human intervention. As I'm new to programming, I'm not sure whether my approach is even possible. Any ideas?
We are currently testing an upgrade from CF11 to CF2018 for my company's intranet. To give you an idea of how long this site has been running, our first version of CF was 3.1! It is still using application.cfm, and there is code from 1998, when I started writing this thing. Yes, 21 years -- I'm astonished, too. It is a hodgepodge of all kinds of older frameworks as well, including Fusebox.
Anyway, we're running a Windows 2012 VM connected to a SQL Server 2016 farm. Everything looked OK initially, but in the week I've been testing, the server slowed down once (a page that usually takes 100 ms, with no DB involvement, took more than 5 seconds to run), and another time the server came to a grinding halt. The only way I could restart the CF Application Server service was by connecting from another server via the Services console, because doing it via Remote Desktop was so slow.
Now keep in mind -- it's just me testing. This is a site that doesn't have a ton of users, but still, having 5 concurrent connections is normal and there are upwards of 200-400 users hitting this thing every day.
I have FusionReactor running on this thing now, so the next time a lockup happens I will be able to take a closer look, but what do you think is the best way to test this? Our site is mostly transactional: users filling out forms to put internal orders through. We also connect to XML web services and REST services, and we provide REST services of our own. Obviously there's no way to completely replicate a production server's requests on a test server, but I need to do more thorough testing. Any advice would be hugely appreciated.
I realize your focus for now is trying to recreate the problem in testing. That may not be as easy as hoped. Instead, you should be able to understand and resolve it in production. FusionReactor can help, but the answer may well be in the CF logs.
You don't mention checking the logs from the time of the hang. See especially the coldfusion-error log, for OutOfMemory conditions.
You mention raising the heap, but the problem may be with the metaspace instead. If so, consider simply removing the MaxMetaspaceSize setting from the JVM args. That may well be the sole cause of such new and unexpected outages.
Or if it's not, and there's nothing in the logs at the time, THEN do consider FR. Does IT show anything happening at the time?
If not, then consider the need to tune the CF/web server connector. I assume you're using IIS. How many sites do you have? And how many connectors (folders in the CF config/wsconfig folder)? What are the settings in their workers.properties files? Are they optimized for the number of sites using that connector?
Also, have you updated CF2018? Are there any errors in the update error log? Did you update the web server connector as well?
Are you running the CF2018 PMT (Performance Monitoring Toolset)? Have you updated it?
There could still be more to consider, but let's see how it goes with those. I have blog posts on these and many more topics that elaborate on things, both at my site (carehart.org) and on the Adobe CF portal (coldfusion.adobe.com).
But let's hear if any of this gets you going.
We have a few applications running on Windows 2000 and 2008 servers. They are written in Java.
These applications need to perform many automation tasks, and we are having difficulty monitoring them. Sometimes, for one reason or another, an application either hangs or fails to perform its desired job. We only come to know about this a few days later, when someone reports that the desired function hasn't been executed.
To get around this, we added email notifications for each important exception, but then a developer needs to spend time checking those 1000 emails every day, which again is neither a feasible nor an efficient solution.
Now we are looking for an alerting, alarm, notification display, and monitoring system. We need a remote application that can receive alarms from these Java applications and then, based on certain information/conditions/configuration, display red, orange, or green text on the screen. From the red text, users can see at a glance that there is an issue in the system, and if required they can be notified that there is a serious issue in the application.
Please help us identify any existing mechanism, tool, or package that could achieve this goal. Any suggestion would be highly appreciated.
Thanks
There are myriad ways to achieve this, but all of them will require some effort. Which way to proceed depends on your needs and abilities. A couple of options occur to me:
Have your processes log their exceptions to a syslog daemon running on some central server. You could then have an admin read through the log file for serious problems, but there are many ways to post-process syslog messages; a web search might give you some more hints. (A bare-bones example of sending to syslog from Java follows after these options.)
Is there any way, when logged into the server, to observe whether or not the process is running properly? You could install something like Nagios on a server and write a plugin that monitors your particular process on all the servers. The plugin can basically be a shell script that checks "ps", or a log file, or whatever you want.
If you are in an IT department, your organization might already have some system like this (NMS).
I'm not sure why this question is tagged "snmp", but it's technically possible to install an SNMP agent on each server and have it send traps under certain conditions. I do think that would be slight overkill, because you would also have to get a good SNMP manager to receive the traps and alert a sysadmin.
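Regarding the syslog option mentioned above, here is a bare-bones sketch of what sending one message to a syslog daemon over UDP looks like in Java. Treat it as an illustration only: the host name, program tag, and message are placeholders, and in practice you would more likely route this through your logging framework's syslog appender.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class SyslogAlert {
    // Send one RFC 3164-style message to a syslog daemon over UDP port 514.
    public static void send(String syslogHost, String message) throws Exception {
        int facility = 1;                       // "user-level" facility
        int severity = 3;                       // "error" severity
        int priority = facility * 8 + severity; // encoded in the <11> prefix
        byte[] payload = ("<" + priority + ">myapp: " + message).getBytes("UTF-8");
        DatagramSocket socket = new DatagramSocket();
        try {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName(syslogHost), 514));
        } finally {
            socket.close();
        }
    }

    public static void main(String[] args) throws Exception {
        // Host name and message are placeholders for your environment.
        send("central-log-server", "Automation job failed: timeout talking to backend");
    }
}
```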
I would go for a combination of the check_logfiles plugin to parse logged exceptions and raise alerts, and check_jmx/jmxquery to check metrics inside the JVM, such as heap usage and thread count (a rough Java sketch of the JMX part is shown below the links).
check_logfiles
check_jmx
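For what it's worth, what check_jmx/jmxquery does essentially boils down to connecting to the monitored JVM's remote JMX port and reading MBean attributes such as heap usage. A rough Java sketch of that idea follows; the port, thresholds, and the assumption that the target JVM was started with remote JMX enabled are all placeholders for your own setup.

```java
import java.lang.management.MemoryMXBean;
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HeapCheck {
    public static void main(String[] args) throws Exception {
        // Assumes the monitored JVM was started with remote JMX enabled,
        // e.g. -Dcom.sun.management.jmxremote.port=9999 (plus auth/SSL settings).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        long usedMb;
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Read the standard java.lang:type=Memory MBean's heap usage.
            MemoryMXBean memory = JMX.newMXBeanProxy(
                    connection, new ObjectName("java.lang:type=Memory"), MemoryMXBean.class);
            usedMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);
        } finally {
            connector.close();
        }
        System.out.println("heap used: " + usedMb + " MB");
        // Nagios-style exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL (example thresholds).
        System.exit(usedMb > 900 ? 2 : usedMb > 700 ? 1 : 0);
    }
}
```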
So I have been working on a client/server application written in Java. At the moment I am looking for a way to verify that the code of the client application has not been changed and then recompiled. I've been searching Google for some time without a lot of success. An educated guess would be to generate a hash value of the client's code at runtime, send it to the server, and compare it with a database entry or a variable. However, I am not sure if that is the right way, or even how to generate a hash of the codebase during execution in a secure way. Any suggestions would be greatly appreciated.
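The furthest I've gotten is a rough sketch along these lines (SelfHash and hashOwnJar are just placeholder names, and it assumes the client is launched from a single jar on disk):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class SelfHash {
    // Hash the jar this class was loaded from and return it as hex,
    // so it could be sent to the server for comparison.
    public static String hashOwnJar() throws Exception {
        // Assumes getCodeSource() points at the client jar on the local filesystem.
        Path jar = Paths.get(SelfHash.class.getProtectionDomain()
                .getCodeSource().getLocation().toURI());
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(Files.readAllBytes(jar));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(hashOwnJar());
    }
}
```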
What would stop the nefarious user from simply having the client send the correct checksum to the server each time? Nothing.
There is currently no way to completely ensure that the software running on a client computer has not been altered. It's simply not possible to trust the client's software without asserting control over the client's hardware and software. Unfortunately, this is a situation where you should focus on software features and quality, something that benefits all users, rather than on preventing a few users from hacking your software.
Other than the possible lag issues, has anyone tried developing over a remote desktop connection? What are the pros and cons associated with it?
A lot of the time, for me, it's the limitations of the remote desktop connection, be it VNC or RDP or whatever. For example:
1. My workstation has two monitors. Remotely viewing my workstation reduces it to one.
2. Lag is tolerable in the IDE, but not with anything image-heavy. Everything from photoshopping to web browsing is done locally, not on the remote machine.
3. Adding to #2, when splitting up tasks between the local and remote machine, there's that extra layer of getting the two to play nice together that adds just a little bit of overhead per task, which adds up to a lot overall. Something as simple as saving a file from the web browser and opening it in the IDE takes more steps.
(I may think of more and add them later.)
All in all, it's fine if the setup can be adjusted properly. In my experience, the companies I've worked for have defined their remote connection capabilities around the needs of someone other than the software developers, and thus left us with little pet peeves that make the process just slightly more difficult than it needs to be.
Here is my take on it, from my experience:
PROS: A single dev environment, and you only need to license one set of tools (if applicable).
CONS: The lag got the best of me: typing only to have it show up 1-3 seconds later... sometimes; other times it works great. In VS, the popup notifications sometimes take forever to display as well. Other cons include having to share your desktop with another employee, and moving files to/from the dev machine, as RDP does not natively let you drag and drop files.
Same as the other posters: lag when using tools that affect screen painting in Visual Studio (ReSharper, CodeRush) is a real problem, and some things involving the mouse (dragging grid columns) are very difficult to use.
I'd add that about every 10-15 times I go to log back in on the physical workstation at work, it takes the stupid thing about 2 minutes to finally succeed in refreshing the displays.
One of the current milestones for our (open source) project is to complete USB support, so we're working hard on drivers at the moment. Our current development focuses on EHCI on both x86 and ARM (specifically the OMAP35xx SoC, whose silicon is EHCI-only). We have mostly everything running smoothly in a variety of emulators: VMware (free and non-free versions), QEMU, and VirtualBox.
When we test on real hardware, however, we get absolutely nowhere. The basic routine for device enumeration in our system goes something like this:
Turn on port power (if the option is available) and wait for power to the device to stabilise.
Perform a port reset (held for 50 ms) and then wait as long as needed for the reset to complete (while loop)
If the port has a device present, and is enabled, notify the system that a new USB device is available for initialisation.
Send the SET ADDRESS command to assign an address to the device. This is where we run into problems every time:
The SETUP transaction for this command completes without error
The zero-length IN transaction (status phase) throws a transaction error, halts the qTD, and disables the port.
Our timing delays are basically the same as those in Linux's driver (if anything, longer).
According to the USB 2.0 specification, this behaviour is a "Port Error" (section 11.8), but to be completely honest I don't see how to translate its description of a port error into a working solution for our driver. As we are an open source project, we also don't have the money to go out and purchase a proper hardware USB protocol analyser to investigate exactly what's going on on the line.
Has anyone faced a similar problem and found a solution?
We have identified that the cause of this problem was a timing issue, but in our case the issue was too much delay.
By modifying our qTD/QH creation code to create a single QH with multiple linked qTDs associated with it, we've been able to get successful runs on physical hardware.
We also had to use the EHCI 64-bit data structures, which had not been implemented previously.