Developing over VPN connection on a virtual desktop - development-environment

Other than the possible lag issues, has anyone tried this? What are the pros or cons associated with this?

A lot of the time, for me, it's the limitations of the remote desktop connection, be it VNC or RDP or whatever. For example:
1. My workstation has two monitors. Remotely viewing my workstation reduces it to one.
2. Lag is tolerable in the IDE, but not with anything image-heavy. Everything from photoshopping to web browsing is done locally, not on the remote machine.
3. Adding to #2, when splitting up tasks between the local and remote machine, there's that extra layer of getting the two to play nice together that adds just a little bit of overhead per task, which adds up to a lot overall. Something as simple as saving a file from the web browser and opening it in the IDE takes more steps.
(I may think of more and add them later.)
All in all, it's fine if the setup can be adjusted properly. In my experience, the companies I've worked for have defined their remote connection capabilities by the needs of someone other than the software developers, and thus leave us with little pet peeves that make the process just slightly more difficult than it needs to be.

Here is my take on it from my experience:
PROS: Single dev environment, only need to license one set of tools (if applicable)
CONS: The lag got the best of me. Typing, only to have it show up 1-3 seconds later... sometimes; other times it works great. In VS, the popup notifications sometimes take forever to display as well. Other cons include having to share your desktop with another employee, and possibly moving files to/from the dev machine, as RDP does not natively let you drag and drop files.

Same as other posters: lag when using tools that affect screen painting in Visual Studio (ReSharper, CodeRush) is a real problem, and some things involving the mouse (dragging grid columns) are very difficult to use.
I'd add that about one in every 10-15 times I go to log back in on the physical workstation at work, it takes the stupid thing about 2 minutes to finally refresh the displays.

Related

ColdFusion 11 to 2018 Upgrade -- Server Locking Up, How to Test Better?

We are currently testing an upgrade from CF11 to CF2018 for my company's intranet. To give you an idea how long this site has been running, our first version of CF was 3.1! It is still using application.cfm, and there is code from 1998, when I started writing this thing. Yes, 21 years -- I'm astonished, too. It is a hodgepodge of all kinds of older frameworks, too, including Fusebox.
Anyway, we're running a Windows Server 2012 VM connected to a SQL Server 2016 farm. Everything looked OK initially, but in the week I've been testing, the server slowed down once (a page that usually takes 100ms, with no DB involvement, took more than 5 seconds), and another time it came to a grinding halt. The only way I could restart the CF Application service was by connecting from another server via the Services console, because doing it over Remote Desktop was so slow.
Now keep in mind -- it's just me testing. This is a site that doesn't have a ton of users, but still, having 5 concurrent connections is normal and there are upwards of 200-400 users hitting this thing every day.
I have FusionReactor running on this thing now, so the next time a lockup happens I will be able to take a closer look, but what do you think is the best way I can test this? Our site is mostly transactional: users going in and filling out forms to put internal orders through. We also connect to XML web services and REST services, and we provide REST services of our own. Obviously there's no way to completely replicate a production server's requests onto a test server, but I need to do more thorough testing. Any advice would be hugely appreciated.
I realize your focus for now is trying to recreate the problem on test. That may not be as easy as hoped. Instead, you should be able to understand and resolve it in production. FusionReactor can help, but the answer may well be in the cf logs.
You don't mention assessing the logs at the time of the hang-up. See especially the coldfusion-error log for OutOfMemory conditions.
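If it helps, here is a minimal sketch (in Python) of the kind of scan I mean; the log path is an assumption based on a default cfusion instance, so adjust it for your install:

```python
# Minimal sketch: scan the coldfusion-error log for OutOfMemory entries.
# The log path is an assumption (default cfusion instance); adjust it for your install.
from pathlib import Path

LOG_PATH = Path(r"C:\ColdFusion2018\cfusion\logs\coldfusion-error.log")  # assumed default location

def find_oom_lines(log_path: Path):
    """Yield (line_number, text) for log lines that mention OutOfMemory."""
    with log_path.open(encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if "OutOfMemory" in line:
                yield lineno, line.rstrip()

if __name__ == "__main__":
    for lineno, text in find_oom_lines(LOG_PATH):
        print(f"{lineno}: {text}")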
You mention raising the heap, but the problem may be with the metaspace instead. If so, consider simply removing the maxmetaspace setting in the jvm args. That may be the sole and likely cause of such new and unexpected outages.
Or if it's not, and there's nothing in the logs at the time, THEN do consider FR. Does IT show anything happening at the time?
If not then consider a need to tune the cf/web server connector. I assume you're using iis. How many sites do you have? And how many connectors (folders in the cf config/wsconfig folder)? What are the settings in their workers.properties file? Are they optimized for the number of sites using that connector?
Also, have you updated cf2018? Are there any errors in the update error log? Did you update the web server connector also?
Are you running the cf2018 pmt (performance monitoring tool set)? Have you updated it?
There could be still more to consider, but let's see how it goes with those. I have blog posts on these and many more topics that would elaborate on things, both at my site (carehart.org) and the Adobe cf portal (coldfusion.adobe.com).
But let's hear if any of this gets you going.

MicroStrategy Developer too slow

I am trying to connect to an MSTR Intelligence Server in Seattle from MSTR Developer running on my laptop in Bangalore. It takes an average of 10+ seconds for any action I do in Developer, such as logging in, opening folders, or opening a report. It is almost impractical to do any report development this way (not to mention the frustration).
When my colleague connects to the same instance/project from Seattle he doesn’t face any delays. So I figure that this is a network issue and doesn’t have much to do with the metadata or indexes. The network response time to the box is 30ms and 300ms average from Seattle and Bangalore respectively. I found online that 280ms is average response time from India to US. Accessing the reports and projects via the web interface is smooth though.
Have you ever experienced a situation like this before? Can the network delays cause that much trouble on MicroStrategy? Please help…
PS: This question is not quite a fit for SO, but I guess that MSTR developers commonly face this problem and maybe they know a fix. Hence posting it here rather than on SU or somewhere else.
This is a pretty common problem, in my experience. I believe that MicroStrategy's network traffic is XML based, so network bandwidth as well as latency is an issue.
Usually, the web server is more responsive because:
- It is performing "simpler" tasks than Developer
- The network-intensive traffic is between the I-Server and the web server, so if they're colocated, performance will be reasonable.
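To get a feel for how much raw latency can cost when the protocol is chatty, here is a rough back-of-the-envelope sketch in Python; the number of round trips per action is an illustrative assumption, not a measured value:

```python
# Rough illustration: if one Developer action needs many request/response round trips,
# the time per action scales with network latency. The round-trip count is an
# illustrative assumption, not a measured value.

ROUND_TRIPS_PER_ACTION = 30   # assumed number of chatty exchanges per action
LATENCIES = {"Seattle": 0.030, "Bangalore": 0.300}   # seconds, from the question

for city, latency in LATENCIES.items():
    # Ignores bandwidth and server processing time; latency cost alone:
    print(f"{city}: ~{ROUND_TRIPS_PER_ACTION * latency:.1f} s of pure latency per action")
```

With 30 exchanges per action, Seattle pays well under a second in pure latency while Bangalore pays around nine seconds, which lines up with the 10+ second delays described in the question.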
I'm afraid I've never come across an effective solution to this issue. Having a "jump server" in the same data centre as the MSTR servers, with the Developer software installed, is usually the most tolerable solution (provided Remote Desktop isn't too laggy).
Same solution here: we have developer VMs on a host in the same datacenter as the server, and we Remote Desktop into them. From there, we use Developer, Object Manager, etc.
You can still do 90% of the tasks in web.

Stress testing a desktop app system

If I want to stress test a 'classic' client-server (desktop app <-> LAN <-> database server) Windows Forms desktop application to see how it performs when many concurrent PC users are using it, how should I go about it? I want to simulate many PC users concurrently going through a work flow, to see if it all stands up and at what point the system degrades unacceptably. I've looked at many test tools but they all seems to be skewed toward testing functionality or web app performance, which is quite different.
Clearly having many actual people on actual PCs is not practical, and lots of virtual machines on a few PCs is not representative either. 'Cloud' computing (EC2, Azure etc) looks promising but the documentation and pricing information all seems to be skewed towards mobile apps or web servers, again not the same (but that could just be presentation so I remain open to the idea). I need to be able to virtualise a small LAN of many client machines running the application and a database server.
Can anyone suggest how to do this, or recommend something?
TIA
IMHO the real question is - do you really need to do performance testing in your case? Consider this - where is your business and functional logic?
Performance testing of desktop applications is an oxymoron in itself. A desktop application is made to be used by one person at a time, so if getting a response takes 5 seconds, it will take (pretty much) 5 seconds no matter how many users are clicking the button. The only real thing close to a backend here is the DB, and databases are designed to support serious concurrent load. If that is not enough, just set up a cluster.
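If you do decide the database is the piece worth stressing, here is a minimal sketch of the idea in Python; the connection string, table names and queries are hypothetical placeholders, and a real test would replay the queries your Windows Forms app actually issues:

```python
# Minimal sketch: hit the database with N concurrent simulated "workflows".
# The connection string, table names and queries are hypothetical placeholders;
# a real test would replay the queries your Windows Forms app actually issues.
import time
from concurrent.futures import ThreadPoolExecutor

import pyodbc  # assumes the pyodbc package and a SQL Server ODBC driver are installed

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=testdb;DATABASE=AppDb;Trusted_Connection=yes")   # placeholder
USERS = 50         # simulated concurrent users
ITERATIONS = 20    # workflows per simulated user

def one_workflow(user_id: int) -> float:
    """Run one read/write cycle and return its elapsed time in seconds."""
    start = time.perf_counter()
    conn = pyodbc.connect(CONN_STR)
    try:
        cur = conn.cursor()
        cur.execute("SELECT TOP 100 * FROM Orders WHERE CustomerId = ?", user_id)   # placeholder query
        cur.fetchall()
        cur.execute("INSERT INTO OrderAudit (CustomerId, Note) VALUES (?, ?)",
                    user_id, "stress test")                                         # placeholder query
        conn.commit()
    finally:
        conn.close()
    return time.perf_counter() - start

def simulate_user(user_id: int) -> list:
    """Run the workflow ITERATIONS times for one simulated user."""
    return [one_workflow(user_id) for _ in range(ITERATIONS)]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        timings = [t for per_user in pool.map(simulate_user, range(USERS)) for t in per_user]
    timings.sort()
    print(f"median {timings[len(timings) // 2]:.3f}s, worst {timings[-1]:.3f}s")
```

Plotting the median and worst-case times while you raise USERS gives you the degradation curve you are after, without needing a fleet of real PCs.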

AX 2009 Code Propagation with Load Balancing

I'm curious how AX 2009 handles code propagation when operating in a load balanced environment.
We have recently converted our AX server infrastructure from a single AOS instance to 3 AOS instances, one of which is a dedicated load balancer (effectively 2 user-facing servers). All share the same application files and database. Since then, we have had one user who has been having trouble receiving code updates made to the system. The changes generally take a few days before they can see it, and the changes don't seem to update all at once.
For example, a value was added to an ENUM field, and they were not able to see it on a form where it was used (though others connected to the same instance were). Now, this user can see the field in the dropdown as expected, but when connected to one of the instances it will not flow onto a report as it should. When connected to the other instance it works fine, and for any other user connected to either instance it works properly.
I'm not certain if this is related to the infrastructure changes, but it does seem odd that only one user is experiencing it. My understanding was that with this setup, code changes would propagate across the servers either immediately (due to sharing the Application Files), or at least in a reasonable amount of time (<1 day). Is this correct or have I been misinformed?
As your cache problem seems to be per user, go learn about AUC files.
The files are stored on the client computer and can be tricky to keep in sync. There are other problems as well.
Start AX with a script that deletes the AUC file before launching the client (see the sketch below).
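Here is a minimal sketch of such a launcher in Python; the AUC location under %LOCALAPPDATA% and the client install path are assumptions, so adjust them for your environment:

```python
# Minimal launcher sketch: delete the user's AX client cache (*.auc), then start the client.
# The cache location and client path are assumptions (default install); adjust for your environment.
import glob
import os
import subprocess

auc_dir = os.environ.get("LOCALAPPDATA", "")   # AUC files normally live in the user's local app data
ax_client = r"C:\Program Files (x86)\Microsoft Dynamics AX\50\Client\Bin\Ax32.exe"   # assumed path

for auc_file in glob.glob(os.path.join(auc_dir, "ax*.auc")):
    try:
        os.remove(auc_file)
        print(f"Deleted {auc_file}")
    except OSError as exc:
        print(f"Could not delete {auc_file}: {exc}")

subprocess.Popen([ax_client])   # start the AX 2009 client
```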
There is no cache coherency between AOS instances: import an XPO on one AOS server, and it is not visible on the other. You will either have to flush the cache manually or restart the other AOS. The simplest thing is to import on each server; this is especially true for labels, as that is the only way I know of to bring labels in sync.
I am sort of curious about this as well, but what I do know is that if a user has access to the AOT (is a member of admin or of a group with developer access), the client will cache AOT elements more aggressively than a client without developer access.
Elements (like an Enum) might be cached at client level, but also at AOS-level. Restarting the AOS (service) would flush out memory for that service, forcing it to reload elements upon restart.
I guess what I am suggesting is that you make sure the element is not cached client-side. Either restart the client, or run "Refresh AOD" from the developer tools menu. If that doesn't help, try restarting the AOS the client connects to, and see if that helps.
I think it is safe to say that if you want to be absolutely sure every user has the most recent "copy" of any element, you should not develop on the application files shared by all of these services, but rather develop in an environment with one AOS. When you need to move things to production, take down all AOSes in production and move the changes over while the system is down.
In such cases it is often difficult to find the exact cause for a specific case.
I try to follow some best practices to avoid such situations:
- Use separate environment for developing
- Deploy code changes using layer files, not XPOs
- When deploying, stop all AOSs, deploy the files, delete the index files in the application directory, start one AOS, compile, sync the DB, then start the other AOSs (or even shut everything down and start again); see the sketch after this list
- Try to have latest kernel versions for AOSs and client
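For the deployment step above, here is a rough sketch of automating the stop/deploy/clean/start sequence; the AOS service names, application directory and index-file pattern are assumptions to adapt to your environment, and the script needs to run elevated:

```python
# Rough sketch of the stop -> deploy -> delete index files -> start sequence.
# Service names, application directory and index-file pattern are assumptions;
# adapt them to your environment. Remote AOS hosts would need the
# "sc \\hostname ..." form instead of plain "sc".
import glob
import os
import subprocess

AOS_SERVICES = ["AOS50$01", "AOS50$02", "AOS50$03"]        # assumed AOS service names
APP_DIR = r"\\axshare\Application\Appl\Standard"           # assumed shared application directory

def sc(action: str, service: str) -> None:
    """Send a control request to a Windows service via sc.exe."""
    # Note: sc returns as soon as the request is accepted; in practice you would
    # poll the service state before moving on.
    subprocess.run(["sc", action, service], check=True)

# 1. Stop every AOS.
for svc in AOS_SERVICES:
    sc("stop", svc)

# 2. Deploy the layer files into APP_DIR here (copy step omitted).

# 3. Delete the application object index files so they are rebuilt on start.
for index_file in glob.glob(os.path.join(APP_DIR, "*.aoi")):
    os.remove(index_file)

# 4. Start one AOS first, compile and sync the DB from a client connected to it, then start the rest.
sc("start", AOS_SERVICES[0])
# ... compile and synchronize from an AX client here ...
for svc in AOS_SERVICES[1:]:
    sc("start", svc)
```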

How can you test your web server speed?

Our website seems to be slower than it used to be. How can I test that? And is there a way to find the cause (e.g. too many visitors)?
Thanks.
There is a rather good tool for performance benchmarking of web servers: Jakarta JMeter, which is an Apache project, so it's rather well supported and tested.
The key to be able to pinpoint the cause would be to do benchmarking regularly, so you can actually match changes in your benchmark results with events on your server: upgrades, code changes, variations in the number of visitors...
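If you want something lighter than a full JMeter test plan for the regular benchmarking, here is a minimal sketch in Python that times a few URLs and appends the results to a CSV you can chart over time; the URLs are placeholders:

```python
# Minimal sketch: time a few URLs and append the results to a CSV so you can
# correlate trends with deploys, upgrades and traffic changes. URLs are placeholders.
import csv
import time
from datetime import datetime, timezone
from urllib.request import urlopen

URLS = ["https://example.com/", "https://example.com/some-heavy-page"]   # placeholder URLs
SAMPLES = 5

with open("benchmark.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for url in URLS:
        timings = []
        for _ in range(SAMPLES):
            start = time.perf_counter()
            with urlopen(url) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        writer.writerow([datetime.now(timezone.utc).isoformat(), url,
                         round(min(timings), 3), round(sum(timings) / SAMPLES, 3)])
```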
The Firebug add-on for Firefox has a Net tab which is useful for debugging issues and testing. Also, Fiddler on Windows is nice. And then there is the age-old tradition of checking your server error logs for any problems.
A good first step is to make sure you are keeping fairly complete server logs and feeding them into a log analyser. This is helpful for giving you a general idea of how long things take and which pages are slowest. It's also a good idea to check your error logs to make sure things are working properly.
Beyond that, things get more complicated, as you may need to isolate your web server, code and database to see if one of them is the bottleneck. Also, Jeff's blog, Coding Horror, had a recent entry on server optimization.
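As a starting point for the log analysis, here is a minimal sketch in Python that pulls the slowest URLs out of an access log; it assumes each line ends with a response-time field (such as Apache's %D, in microseconds) added to the log format, so adjust the parsing for your actual logs:

```python
# Minimal sketch: pull the slowest URLs out of an access log. Assumes each line ends
# with a response-time field (e.g. Apache's %D, in microseconds) added to the LogFormat;
# adjust the parsing for your actual log format.
from collections import defaultdict

LOG_FILE = "access.log"                      # placeholder path
totals = defaultdict(lambda: [0, 0.0])       # url -> [hits, total_seconds]

with open(LOG_FILE, encoding="utf-8", errors="replace") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 8:
            continue
        url = parts[6]                                # request path in common/combined log format
        try:
            elapsed = int(parts[-1]) / 1_000_000      # %D is in microseconds
        except ValueError:
            continue
        totals[url][0] += 1
        totals[url][1] += elapsed

slowest = sorted(totals.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)[:10]
for url, (hits, total) in slowest:
    print(f"{url}: {total / hits:.3f}s average over {hits} hits")
```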
Use Google Analytics to track your site's visitors over time to find out if you are getting more traffic.
You tagged your question with shared-hosting; being on a shared host means that someone else's code running on the same machine as yours may be affecting your site's performance.
I'd suggest going with Varkhan's and apphacker's suggestions to make sure your site's code is reasonably quick. Use Analytics to get some stats, and then possibly, depending on how many visitors you are getting and how slow the site is, consider moving away from a shared host.
Try the server speed checker at Bitcatcha.com. The tool pings your website's server from 8 different nodes and records the time needed to get a response, so you can at least find out whether it's your server that is slowing your website down.