How should slow diagnostics be handled in a direct VS Code API implementation (no language server)? - vscode-extensions

The diagnostics in my VS Code extension, implemented directly against the API (no language server), are blocking other things (e.g. completions) while they run. Furthermore, it seems that if they are triggered again before the prior analysis completes, the delays multiply several-fold. As a result the experience can be very unpleasant on complex workspaces.
Is there a small example of an extension that does not use LSP and that handles this well? The API documentation is very light on this point, and I don't find much by searching. Alternatively, can someone summarize a broad strategy for dealing with it? Again, the issue is that diagnostics block other services for too long.
Further notes:
In early attempts I simply invoked the diagnostics on every document change event; the results were very nice for small files but disastrous for large ones.
I got somewhat better results by using a timer to prevent diagnostics from running in rapid succession, i.e. not starting a run if another run was started recently. A run is forced on "document will save".
I briefly toyed with putting the diagnostics in an asynchronous function, with seemingly no improvement, but maybe I did not spend enough time working at it. Should this be necessary? I had assumed the API would take care of concurrent operations behind the scenes.
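For concreteness, here is a rough sketch of the timer-based approach I'm describing, plus the cancellation and periodic yielding I'm considering adding; analyzeLine stands in for my actual analysis, and the delay and yield interval are arbitrary:

```ts
import * as vscode from 'vscode';

// Placeholder for the real per-line analysis (not shown here).
declare function analyzeLine(line: vscode.TextLine): vscode.Diagnostic[];

const collection = vscode.languages.createDiagnosticCollection('mylang');
const pending = new Map<string, ReturnType<typeof setTimeout>>();
let tokenSource: vscode.CancellationTokenSource | undefined;

// Coalesce rapid document-change events into one run per document.
export function scheduleDiagnostics(doc: vscode.TextDocument, delayMs = 500) {
  const key = doc.uri.toString();
  const existing = pending.get(key);
  if (existing) {
    clearTimeout(existing);
  }
  pending.set(key, setTimeout(() => runDiagnostics(doc), delayMs));
}

async function runDiagnostics(doc: vscode.TextDocument) {
  tokenSource?.cancel();                    // abandon any run still in flight
  tokenSource = new vscode.CancellationTokenSource();
  const token = tokenSource.token;

  const diagnostics: vscode.Diagnostic[] = [];
  for (let line = 0; line < doc.lineCount; line++) {
    if (token.isCancellationRequested) {
      return;                               // a newer run superseded this one
    }
    diagnostics.push(...analyzeLine(doc.lineAt(line)));
    if (line % 200 === 0) {
      // Yield back to the event loop so completions etc. stay responsive.
      await new Promise<void>(resolve => setImmediate(resolve));
    }
  }
  collection.set(doc.uri, diagnostics);
}
```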

Related

IWantToRunWhenBusStartsAndStops not for production?

New to NServiceBus (4.7.5) and just implemented an NSB host.exe hosted service (implementing IWantToRunWhenBusStartsAndStops) that detects changes to database tables and notifies subscribing web apps by publishing events, e.g. "CustomerDataWasUpdatedEvent". In the future we will obviously perform the actual update through message handlers receiving commands, but at the moment this publishing service just polls the database etc.
It all works well. However, approaching production, I noticed that David Boike, in his latest edition of "Learning NServiceBus", states that classes implementing IWantToRunWhenBusStartsAndStops are really mostly for development and rarely used in production. I set up my database change detection in the Start method and it works nicely; does anyone know why this is discouraged?
Here is the comment in the actual book:
https://books.google.se/books?id=rvpzBgAAQBAJ&pg=PA110&lpg=PA110&dq=nservicebus+iwanttorunwhenbusstartsandstops+in+production+david+boike&source=bl&ots=U6sNII0nm3&sig=qIXffOVFhcy-_3qDnSExRpwRlD4&hl=sv&sa=X&ei=lHWRVc2_BKrWywPB65fIBw&ved=0CBsQ6AEwAA#v=onepage&q=nservicebus%20iwanttorunwhenbusstartsandstops%20in%20production%20david%20boike&f=false
The actual quote is:
...it isn't common to have widespread use of them in a production system.
Uncommon is not the same thing as discouraged.
That said, I do think there is intent here by the author to highlight the fact that, further up the page, they assert that this is not a good place to be doing lots of coding, as an unhandled exception can cause the whole process to fail.
The author actually does go on to mention a possible use case: when you may want to load a resource (or resources) to do work within the handler.
Ok, maybe it's just this scenario we have that is a bit uncommon
Agreed - there is nothing fundamentally wrong with your approach. I recently did the same thing as you, wiring up SqlDependency to listen for database events and then publish a message as a result. In these scenarios there is literally nothing else you can do other than use IWantToRunAtStartup.
Also, David himself often trawls the nservicebus tag; maybe he'll provide a more definitive answer than mine.
I'll copy the answer I gave in the Particular Software Google Group...
I'll quote myself directly here:
An implementation of IWantToRunWhenBusStartsAndStops is a great place to create a quick interface in order to test messages during debugging by allowing you to send messages based on the console input. Apart from this, it isn't common to have widespread use of them in a production system. One possible production use case will be to provision a resource needed by the endpoint at startup and then tear it down when the endpoint stops.
I think if I could add a little bit of emphasis it would be to "widespread use". I'm not trying to say you won't/can't have an IWantToRunWhenBusStartsAndStops in production code or that avoiding them is a best practice. I am trying to say that having a ton of them is probably a code smell.
Above that paragraph in the book, I warn about IWantToRunWhenBusStartsAndStops not having any ambient transactions or try/catch stuff going on. THAT is really the key part. If you end up throwing an exception in an IWantToRunWhenBusStartsAndStops, you can run into big problems. If you use something like a .NET Timer and then throw an exception, you can crash your process!
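(As an aside, the same pitfall can be sketched in plain TypeScript/Node rather than .NET - the code below is purely illustrative and pollDatabaseForChanges is made up - the point is only that a timer callback has to contain its own failures, because nothing above it will catch them:)

```ts
// Hypothetical polling routine; stands in for whatever work the timer does.
declare function pollDatabaseForChanges(): void;

// Dangerous: an exception thrown inside the callback has no handler above it,
// so it takes the whole host process down.
setInterval(() => {
  pollDatabaseForChanges();
}, 5000);

// Safer: contain failures inside the callback and log/retry instead of crashing.
setInterval(() => {
  try {
    pollDatabaseForChanges();
  } catch (err) {
    console.error('poll failed; will retry on the next tick', err);
  }
}, 5000);
```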
Let me tell you how I screwed up on this in my first-ever NServiceBus system. The system (still in use today, from what I hear) is responsible for ingesting more than 3000 RSS feeds (probably a lot more than that now) into a CMS. So processing each feed, breaking it up into items, resizing images, encoding attached video for mobile ... all those things were handled in NServiceBus message handlers, which was scaled out to multiple servers, and that was all fantastic.
The problem was the scheduler. I implemented that as an IWantToRunWhenBusStartsAndStops (well, actually IWantToRunAtStartup at that time) and it quickly turned into a mess. I kept the whole table worth of feed information in memory so that I could calculate when to fire off the next ProcessFeed command. I was using the .NET Timer class, and IIRC, I eventually had to use threading primitives like ManualResetEvent in order to coordinate the activity. And because I was using .NET Timer, if the scheduler threw an exception, that endpoint failed and had to restart. Lots of weird edge cases and it was always a quagmire of bugs. Plus, this was now a singleton "commander app" so while the feed/item processors could be scaled out, the scheduler could not.
As I got more experienced with NServiceBus, I realized that each feed should have been a saga, starting from a FeedCreated event, controlled through PauseProcessing and ResumeProcessing commands, using timeouts to control the next processing time, and finally (perhaps) ended via a FeedRemoved event. This would have been MUCH more straightforward and everything would have executed inside transactionally-controlled message handlers.
That experience led me to be a little bit distrustful/skeptical of IWantToRunWhenBusStartsAndStops. Not saying it's bad, just something to be aware of. Always be prepared to consider if what you're trying to do couldn't be better accomplished in another way.

UI automation best practices

We have developed some UI automation test cases. Currently we are executing them against an application which is still under development. In our observation, during execution the majority of scripts fail due to application performance issues (e.g. a window did not load properly, or a window took longer than expected to load).
To avoid this, during execution we are planning to check which step failed and re-execute it, checking whether the window has loaded properly and, if so, continuing execution. But I have a feeling that this approach may mask some of the application's performance-related issues, and I am not sure whether we should follow it.
I would like to know whether this can be counted as a best practice.
If you implement some mechanism for re-trying the operation that just failed, you'll keep falling in holes because sometimes, a re-try is not possible due to the app being in an unexpected UI state, or similar things.
Usually, each application has an expected response time and a worst-case response time. Take the worst-case time and use it as the maximum timeout in your playback configuration.
Always try to predict what should happen when, and script accordingly. Making your script tolerate unexpected UI states (like long delays) just turns your testing effort into a more "passive" automation effort.
As a rather crude measure, you could design a recovery scenario that retries the operation at least once (or for a specific period of time). This could help you get a "stable" playback without finding out what timeouts to use.
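A rough illustration of such a recovery wrapper (generic TypeScript rather than any particular tool's syntax; the retry count, delay, and the clickAndWaitForWindow step in the usage comment are made up):

```ts
// Retry a failed step a bounded number of times, then surface the failure.
async function withRecovery<T>(
  step: () => Promise<T>,
  maxRetries = 1,
  delayMs = 2000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        console.warn(`Step failed on attempt ${attempt + 1}; retrying after ${delayMs} ms`);
        await new Promise(resolve => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError; // report the failure instead of silently masking it
}

// Usage (clickAndWaitForWindow is a hypothetical step in your own script):
// await withRecovery(() => clickAndWaitForWindow());
```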
But generally: if a window takes too long to show up, it is a defect. If your timeout is too low, it is a bug -- in your test robot's config. If it is not defined what "takes too long" means, get the performance requirements.
Thus: Fix accordingly.
That's my 2 (OK -- 3) cents :)
Not the "best" but working practice.
Scripts must be portable. From environment to environment (and we all know, that test environments are much slower than UAT/Pre-prod, or Production) - with minimal / zero effort on maintenance.
Therefore:
use synchronization
don't hard-code what can change
make scripts configurable from outside the QTP IDE (see the sketch after this list)
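For instance, a minimal sketch of externalized settings (written in TypeScript here rather than QTP/VBScript; settings.json and its fields are made up for illustration):

```ts
import { readFileSync } from 'fs';

// Hypothetical settings file kept next to the test suite, one copy per environment.
interface TestSettings {
  baseUrl: string;             // differs between test / UAT / production-like environments
  maxWindowTimeoutMs: number;  // the agreed worst-case response time, used as the sync timeout
}

const settings: TestSettings = JSON.parse(readFileSync('settings.json', 'utf8'));
// Scripts then reference settings.maxWindowTimeoutMs instead of hard-coded numbers.
```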
With regard to GUI step automation, here's a general heuristic and acronym to remember: SEED NATALI.
The SEED NATALI acronym stands for the following (a sketch of the synchronization part follows the list).
Synchronize till object
Exists
Enabled
Displayed
verify Number of Arguments
verify Type of Arguments
Log test flow
Investigate any issues occurred
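Here is a sketch of the "Synchronize till object Exists / Enabled / Displayed" part, written against a hypothetical UiObject interface rather than any particular tool's API:

```ts
// Hypothetical handle to a UI object exposed by your automation tool.
interface UiObject {
  exists(): Promise<boolean>;
  isEnabled(): Promise<boolean>;
  isDisplayed(): Promise<boolean>;
}

// Poll until the object is ready for interaction, or fail loudly at the timeout.
async function synchronize(obj: UiObject, timeoutMs: number, pollMs = 250): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if ((await obj.exists()) && (await obj.isEnabled()) && (await obj.isDisplayed())) {
      return;
    }
    await new Promise(resolve => setTimeout(resolve, pollMs));
  }
  // A timeout here is a finding to investigate, not noise to retry away.
  throw new Error(`Object not ready within ${timeoutMs} ms`);
}
```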
Thank you,
Albert Gareev
http://automation-beyond.com/
If the objective is to perform functional testing, then:
It would be helpful to define a benchmark for the response time of the application in each environment. For example, for one web application the maximum load time might be defined as 20 seconds, while for another application it is 10 seconds. Once you have a clear benchmark, you are in a position to catch the issues.
Please note that while defining an application's benchmark there are many criteria (like network bandwidth and server types) which need to be taken into consideration.
If you're adding the retries now for a phase in the application development where the performance isn't stable yet, you should make sure to remove them when the application stabilizes.
QTP is sufficient for testing the performance of desktop or client-server applications for a single user; if you want to test the performance of many users on a client-server application (e.g. web), perhaps you should consider using a load testing tool like LoadRunner.

Testing fault tolerant code

I'm currently working on a server application where we have agreed to try to maintain a certain level of service. The level of service we want to guarantee is: if a request is accepted by the server and the server sends an acknowledgement to the client, we want to guarantee that the request will happen, even if the server crashes. As requests can be long running and the acknowledgement time needs to be short, we implement this by persisting the request, then sending an acknowledgement to the client, then carrying out the various actions to fulfill the request. As actions are carried out they too are persisted, so the server knows the state of a request on start-up, and there are also various reconciliation mechanisms with external systems to check the accuracy of our logs.
This all seems to work fairly well, but we have difficulty saying so with any conviction, as we find it very difficult to test our fault-tolerant code. So far we've come up with two strategies, but neither is entirely satisfactory:
Have an external process watch the server process and then try to kill it off at what the external process thinks is an appropriate point in the test
Add code to the application that will cause it to crash at certain known critical points
My problem with the first strategy is that the external process cannot know the exact state of the application, so we cannot be sure we're hitting the most problematic points in the code. My problem with the second strategy, although it gives more control over where the fault takes place, is that I do not like having code to inject faults within my application, even with optional compilation etc. I fear it would be too easy to overlook a fault injection point and have it slip into a production environment.
I think there are three ways to deal with this. First, where available, I would suggest a comprehensive set of integration tests for these various pieces of code, using dependency injection or factory objects to produce broken actions during those integrations.
Secondly, running the application with random kill -9's and disabling of network interfaces may be a good way to test these things.
I would also suggest testing file system failure. How you would do that depends on your OS, on Solaris or FreeBSD I would create a zfs file system in a file, and then rm the file while the application is running.
If you are using database code, then I would suggest testing failure of the database as well.
Another alternative to dependency injection, and probably the solution I would use, is interceptors. You can enable crash-test interceptors in your code; these would know the state of the application and introduce the failures listed above (or any others you may want to create) at the correct time. It would not require changes to your existing code, just some additional code to wrap it.
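Something along these lines, assuming a hypothetical Action interface for the persisted steps of a request (the interface and the names in the usage comment are made up for illustration):

```ts
// Hypothetical unit of work executed by the server for a persisted request.
interface Action {
  name: string;
  execute(): Promise<void>;
}

// Wraps a real action and fails it deliberately when the crash condition matches.
function withCrashTest(action: Action, shouldFail: (name: string) => boolean): Action {
  return {
    name: action.name,
    async execute() {
      if (shouldFail(action.name)) {
        throw new Error(`Injected fault before executing ${action.name}`);
      }
      await action.execute();
    },
  };
}

// In a crash test (hypothetical names):
// const action = withCrashTest(realAction, name => name === 'SendConfirmation');
// In production the wrapper is simply never applied.
```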
A possible answer to the first point is to multiply the experiments with your external process so that the probability of hitting problematic parts of the code is increased. Then you can analyze the core dump file to determine where the code actually crashed.
Another way is to increase observability and/or commandability by stubbing library or kernel calls, i.e., without modifying your application code.
You can find some resources on Fault Injection page of Wikipedia, in particular in Software Implemented Fault Injection section.
Your concern about fault injection is not a fundamental one. You merely need a foolproof way to prevent such code from ending up in deployment. One way to do so is to design your fault injector as a debugger, i.e. the faults are injected by a process external to your process. This already provides a level of isolation. Furthermore, most OSes provide some kind of access control which prevents debugging unless it is specifically enabled. In the most primitive form it's limited to root; on other operating systems it requires a specific "debug privilege". Naturally, nobody will have that on production, and thus your fault injector cannot even run there.
Practically, the fault injector can set breakpoints at specific addresses, i.e. at a function or even a line of code. You can then react to that, e.g. by terminating the process after a certain breakpoint has been hit three times.
I was just about to write the same as Justin :)
The component I would suggest replacing during testing is the logging component (if you have one; if not, I'd strongly suggest implementing one...). It's relatively easy to replace it with code that generates errors, and the logger usually gets enough information to know the current application state.
It also seems feasible to make sure that the testing code doesn't go into production. I would discourage conditional compilation, though, and rather go with a configuration file to select the logging component.
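Roughly like this, assuming a hypothetical Logger interface and config format (the fault-injecting variant is only ever referenced from a test configuration):

```ts
import { appendFileSync } from 'fs';

// Hypothetical logging interface; the real application would already have something similar.
interface Logger {
  info(message: string, context?: Record<string, unknown>): void;
}

class FileLogger implements Logger {
  info(message: string) {
    appendFileSync('app.log', message + '\n');
  }
}

// Test-only variant: throws when a log message matches, i.e. at a known application state.
class FaultInjectingLogger implements Logger {
  constructor(private readonly failOn: RegExp) {}
  info(message: string, context?: Record<string, unknown>) {
    if (this.failOn.test(message)) {
      throw new Error(`Injected failure at: ${message}`);
    }
    console.log(message, context);
  }
}

// Selected via a config entry such as { "logger": "fault", "failOn": "request persisted" }.
function createLogger(config: { logger: string; failOn?: string }): Logger {
  return config.logger === 'fault'
    ? new FaultInjectingLogger(new RegExp(config.failOn ?? '$^'))
    : new FileLogger();
}
```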
Using "random" kills might help to detect errors but is not well suited for systematic testing because of its non-determinism. Therefore I wouldn't use it for automatic tests.

What methods do you use to test for scalability in web applications?

Our testing system is pretty rudimentary; fire up a browser, see if it works. Recently we ran into problems, found by our client, with our application where the number of users created a slow-down in the application. The application is basically a huge Word document with people editing their own versions all at the same time. Part of the problem came from not knowing how to test multiple instances at the same time. My partner and I thought about how to test this; one idea was to hire out an internet cafe and hire students for an hour to bang on the app.
What are other ways that people have tried to emulate concurrency in testing their web-based application? Most of the advice here is for specific methodology; I'm asking, how do you test it to make sure that it works?
If you have never checked out Selenium, then you need to. It will allow you to do automated web testing through the browser. Ok, so first problem solved.
Now, ideally you could take that same script, load it up on a bunch of boxes, and run them all at once to get some load testing, right? Luckily for you, someone has already figured this out, although it is a paid service: Browser Mob. It looks like you were willing to spend a little money to do this anyway, and it would probably net you better, more repeatable results.
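As a starting point, a single simulated user in Selenium might look roughly like this (Node/TypeScript with selenium-webdriver; the URL and selectors are placeholders for your application):

```ts
import { Builder, By, until } from 'selenium-webdriver';

// Drives one browser session through a typical edit-and-save flow and returns its duration.
async function editDocumentOnce(): Promise<number> {
  const driver = await new Builder().forBrowser('chrome').build();
  const start = Date.now();
  try {
    await driver.get('https://example.com/editor');                       // placeholder URL
    await driver.wait(until.elementLocated(By.css('#document')), 10000);  // placeholder selector
    await driver.findElement(By.css('#document')).sendKeys('concurrent edit test');
    await driver.findElement(By.css('#save')).click();
    await driver.wait(until.elementLocated(By.css('.saved-indicator')), 10000);
  } finally {
    await driver.quit();
  }
  return Date.now() - start; // rough response time for one simulated user
}
```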
We usually answer the question "can the web application do more than one thing at a time" by using JMeter to produce a simulated HTTP load on the web server.
I find that it helps to distinguish several different types of testing: concurrency (what happens when two events in the system collide), capacity (what happens when there are many overlapping requests), volume (what happens as data accumulates in the system)...
A huge general slow-down, evidenced by response times that fall outside of the SLA, is usually related to capacity problems (with contention as a common cause) or volume (many users, much data, and the system gets slower over time). The former usually requires some sort of multi-threaded request stream; the latter you can usually manage by preloading the volume and then measuring the response times experienced by a single user.
I generally find that separating the load generator from the actual measurement/instrumentation is a good idea. That can be as simple as having a black box over there to generate a typical load, and sitting here with a stop watch measuring the responsiveness of a typical use case.
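A bare-bones load generator along those lines might look like this sketch (TypeScript, assuming Node 18+ for the global fetch; the URL, user count, and duration in the usage comment are placeholders):

```ts
// Fires a steady stream of concurrent requests; measurement of a "typical user"
// can happen separately (even with a stopwatch) while this runs.
async function generateLoad(url: string, concurrentUsers: number, durationMs: number) {
  const latencies: number[] = [];
  const endAt = Date.now() + durationMs;

  async function user() {
    while (Date.now() < endAt) {
      const start = Date.now();
      try {
        await fetch(url);
      } catch {
        // a real harness would count errors separately
      }
      latencies.push(Date.now() - start);
    }
  }

  await Promise.all(Array.from({ length: concurrentUsers }, () => user()));
  latencies.sort((a, b) => a - b);
  const p95 = latencies[Math.floor(latencies.length * 0.95)];
  console.log(`requests: ${latencies.length}, p95 latency: ${p95} ms`);
}

// e.g. generateLoad('https://example.com/app', 50, 60000);
```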
JMeter http://jmeter.apache.org/

Unattended Processing - Application Automation

I'm looking for information [I hesitate to infer "Best Practices"] for Automating Applications. I'm specifically referring to replacing that which is predictably repeatable through traditional manual means [humans manipulating the GUI] with something that is scheduled by the User and performed "Automatically".
We use AutoIT internally for performing Automated Testing and have considered the same approach for providing Unattended Processing of our applications, but we're reluctant due to the possibility of the user "accidentally" interacting with the Application in parallel with the execution of a scheduled "automation" and therefore "breaking" the automation.
Shy of building in our own scheduler with known events and fixed arguments for controlling a predefined set of actions, what approaches should I evaluate/consider and which tools would be required?
Additional Information:
Some would refer to this capability as "Batch Processing" within the application context.
In general it is a hazardous practice to automate UIs. It can be a useful hack for a short term problem: I find myself using AutoHotKey to run some tedious tasks in some situations... but only if the task is not worthy of writing code to implement the change (i.e., a one time, 15 minute task).
Otherwise, you will likely suffer from inconsistent runs due to laggy response of some screens, inconsistent UIs, etc. Most applications have an API available, and not using it is going to be far more painful than acquiring and using it in 99% of cases.
In the unfortunate but possible situation that there is no API and you are reduced to screen scraping/manipulating, a tool that performs automated testing is probably as good as you will get. It allows you to verify the state of the app (to some degree) and thus lets you build in some safety nets. Additionally, I would dedicate a workstation to this task... with the keyboard and mouse locked away from curious users. (A remote desktop or VNC style connection works well for this: you can kick off the process and disconnect, making it resistant to tampering.)
However, I would consider that approach only as a desperate last resort. Manipulating an API is far, far, far, far (did I get enough "fars" in there?) more sustainable.
If I understand correctly, you want to do automated processing using some tool that will execute a predefined list of actions in a given software system, which is different from automated testing.
I would urge you to avoid using tools meant for testing to perform processing. Many major software systems have public APIs you can use to perform actions without direct user interaction. This is a much more robust and reliable way to schedule automated processes. Contact the vendor of the software you are working with; sometimes the APIs are available upon request.
Godeke and Dave are absolutely correct that, if available, the API is the best route. However, practically this is sometimes not possible, and you have to go the GUI automation route. In addition to the previously mentioned dedicated workstation(s) to run the automation, I recommend coding in some audit trails, so that it is easier to debug or backtrack if problems arise. Your batch processing automation should keep a detailed log of what records were processed, when they were processed and how they were processed. You should set it up so that the records themselves (in the native application) will reflect that it was updated/processed via automation. For example, if each record has an updateable notes/comments field, the automation should add text to this field like "Processed by automation user, 2009-02-25 10:05:11 AM, Account field changed from 'ABC123' to 'DEF456'" That way, the automated mods will be readily apparent to a user manually pulling up the record in the GUI.
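For illustration, such an audit entry could be produced by a small helper like this sketch (TypeScript; the record shape and file name are hypothetical, and the log format mirrors the example above):

```ts
import { appendFileSync } from 'fs';

// Hypothetical description of one field change made by the automation.
interface FieldChange {
  field: string;
  from: string;
  to: string;
}

// Appends an audit line to an external log and returns the same text so it can
// also be written into the record's notes/comments field in the application.
function auditRecordUpdate(
  recordId: string,
  changes: FieldChange[],
  logPath = 'automation-audit.log',
): string {
  const stamp = new Date().toISOString();
  const details = changes.map(c => `${c.field} changed from '${c.from}' to '${c.to}'`).join('; ');
  const line = `Processed by automation user, ${stamp}, record ${recordId}: ${details}`;
  appendFileSync(logPath, line + '\n');
  return line;
}
```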