We have an unattended app with no user interface that is run periodically.
It is a VB.NET app. Instead of it being developed as a service, or a formless Windows application, it was developed with a form and all the code was placed in the form_load logic, with an "END" statement as the last line of code to terminate the program.
Other than producing a program that uses unneeded Windows form resources, is there a compelling reason to send this code back for rework to be changed to put the start up logic in a MAIN sub of a BAS file?
If the program is to enter and exit the mix (as opposed to running continuously) is there any point in making it a service?
If the app is developed with a Form do I have to worry about a dialog box being presented that no one will respond to even if there are no MessageBox commands in the app?
I recall there used to be something in VB6 where you could check an app as running unattended, presumably to avoid dialogs.
I don't know whether there are conditions where this will not run.
However, if the code was delivered by someone you will work with going forward, I would look at this as an opportunity to help them understand best practices (which this is not), and to help them understand that you expect best-practice code to be delivered.
First of all, you don't need it to be run in a Form.
Forms are there for presentation, so this kind of logic should not live in one.
If you don't want to mess with converting the application to a Service (not difficult, but not very easy either), you should create a Console Application and then schedule it with the Windows Task Scheduler.
This way, you create a Console Application, with a Main function, that does exactly what you need.
In any case, the program should not show any windows, so there should not be any MessageBox calls. Any communication should be done via logging: to local files, the Windows event log, or a database.
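For example, a result that would otherwise end up in a MessageBox can go to the Windows event log instead. A minimal sketch (the source name "MyNightlyJob" is only illustrative, and creating an event source needs admin rights the first time it runs):

Imports System.Diagnostics

Module JobLogging
    ' Write the job's outcome to the Application event log instead of showing any UI.
    Sub ReportResult(message As String, failed As Boolean)
        Const source As String = "MyNightlyJob"
        If Not EventLog.SourceExists(source) Then
            EventLog.CreateEventSource(source, "Application")
        End If
        EventLog.WriteEntry(source, message,
                            If(failed, EventLogEntryType.Error, EventLogEntryType.Information))
    End Sub
End Module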
If you want more information on any of them, ask me.
If you don't want it to be a service, nothing says that it has to be a windows service. Scheduling it to run via the Task Scheduler or something similar is a valid option.
However, it does sound like the developer should have chosen a "Console App" project instead of a "Windows Forms" project to create this app.
Send it back. The application is bulkier and slower than it needs to be, although that won't be much of an issue. It is somewhat more likely to run out of resources. But the main reason: converting it to a console app is very easy.
If you'd rather not have a console window pop up, simply do the following.
Create a new class "Program.vb", add a public shared Main() method, and move the "OnLoad" logic from the form to this method.
Next delete the form, and change the project start up object (Available in the project properties window) to use the Program.Main instead of the Form.
This will have the same effect, without the Windows Forms resources being used. You can then remove the references to System.Windows.Forms and System.Drawing.
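A minimal sketch of what that Program.vb might look like (RunJob is just a placeholder for whatever the Form_Load logic actually did; in a VB project you may also need to untick "Enable application framework" before a custom start-up object can be selected):

' Program.vb - replaces the form as the project's start-up object.
Public Class Program
    Public Shared Sub Main()
        RunJob()   ' the logic that used to live in Form_Load
    End Sub

    Private Shared Sub RunJob()
        ' ... the unattended, periodic work goes here ...
    End Sub
End Class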
When I learned how to start NSApplications on my own, the code I used (based on here and here) did
[NSApp activateIgnoringOtherApps:YES];
which forces the app to the front at startup.
I'd like to know what most other apps do. I want to be able to run programs both directly from the binary and from an app bundle, and I'm not using Xcode to build this (raw building). So I'd rather this act naturally, so to speak.
The docs do say Finder issues NO, but... why Finder? Isn't this a method that's run from within the process, not outside? (I'm not in control of the choice.) And what about the Dock and other possible entry points?
I even went so far as to disassemble 10.8's NSApplicationMain() to see what it did, but as far as I can tell from the 32-bit version, unless this "light launch" thing issues this selector, this selector is never called.
Is there an answer to this question? Thanks... and sorry if this is confusing; I tried to word it as clearly as possible.
Apps normally do not call -activateIgnoringOtherApps: at all. And, generally speaking, shouldn't. Certainly, it wouldn't be in NSApplicationMain(), which is too early and fairly distantly related to actual app start-up.
Apps are normally launched by Launch Services (which is what is used by the Finder, the Dock, and /usr/bin/open, as well as any other app that might open yours or a document which yours handles). Roughly what happens is that Launch Services deactivates the app which called it to open something else and then, in the launched app, Cocoa's internals do something like (but not necessarily identical to) [NSApp activateIgnoringOtherApps:NO]. In this way, the launched app only activates if nothing else was activated in the interval between those two events. If that interval is long (because something was slow) and the user switched to something else in the meantime, you don't want to steal focus from whatever they switched to.
You should only call [NSApp activateIgnoringOtherApps:YES] in response to a user request to activate your app in a context which won't include the automatic deactivation of the current app by Launch Services. For example, if you have a command-line program which transforms itself into a GUI app (using -[NSApplication setActivationPolicy:] or the deprecated TransformProcessType()), then the user running that tool means they want it active. But Terminal is active and won't be deactivated spontaneously just by virtue of having run your program. So, the program has to steal focus.
If your program is a bundled app, then running it from the command line should be done with /usr/bin/open rather than directly executing the executable inside the bundle. Then, you don't need to call -activateIgnoringOtherApps: at all and the question of what value to pass is moot.
I've hit a bit of a roadblock, and I'm hoping someone can help!
I've written a metro application that serves as a unit test runner, and I now need to be able to call this application headlessly so that it can be used for validation in the build process. The way the metro app works is it runs a bunch of unit tests, generates an XML file that contains the test results, and displays the results to the user.
Ideally, I would have a simple script that would run the metro app, execute the tests, exit the app, and then have the ability to read the results in the generated XML file. Is this possible, and if so, what's the best way to do it?
Here are some more specific questions:
How can one start a metro app headlessly, and in the metro app is there a way to detect this so that it does not wait for user input?
Is it possible to access files within the package of a metro app from an outside process?
EDIT - A workaround would be to create a custom Visual Studio test runner and then find a way to run the tests automatically with each build. I know this can be done within the IDE, but I'm not sure if there's a way to do this with a script.
I imagine you've long since moved past this problem, but for the sake of anyone else looking to do this, I got it to work without too much hassle. To execute a Metro app in an automated/headless fashion, I wrote a simple desktop command-line utility that takes the name of a metro app and makes use of the IApplicationActivationManager interface to launch it. I can then call that utility from a script.
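For reference, a launcher along those lines can be sketched roughly as follows (this is not the exact utility described above; the interop declaration is trimmed to the single method being called, so don't treat it as a complete definition of the interface):

Imports System.Linq
Imports System.Runtime.InteropServices

' Trimmed COM interop declaration: only ActivateApplication (the first method on
' IApplicationActivationManager) is declared, since it's the only one used here.
<ComImport(), Guid("2e941141-7f97-4756-ba1d-9decde894a3d"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)>
Friend Interface IApplicationActivationManager
    Sub ActivateApplication(<MarshalAs(UnmanagedType.LPWStr)> appUserModelId As String,
                            <MarshalAs(UnmanagedType.LPWStr)> arguments As String,
                            options As UInteger,
                            ByRef processId As UInteger)
End Interface

<ComImport(), Guid("45BA127D-10A8-46EA-8AB7-56EA9078943C")>
Friend Class ApplicationActivationManager
End Class

Module Launcher
    Sub Main(args As String())
        ' args(0) is the target app's AppUserModelID (e.g. "PackageFamilyName!App");
        ' everything else is forwarded to the app's OnLaunched handler.
        Dim manager = CType(New ApplicationActivationManager(), IApplicationActivationManager)
        Dim pid As UInteger
        manager.ActivateApplication(args(0), String.Join(" ", args.Skip(1)), 0UI, pid)
        Console.WriteLine("Launched as process {0}", pid)
    End Sub
End Module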
The second argument to that interface's ActivateApplication method is a string that gets passed in to the activated app, kind of like command-line arguments. It shows up as the Arguments property of the LaunchActivatedEventArgs that is received by the app's OnLaunched handler. The default implementation of OnLaunched in the Visual Studio template projects passes this value to the MainPage when it first navigates to it, where it comes through into the OnNavigatedTo handler as the Parameter property of the NavigationEventArgs. You could catch it in whichever place is more convenient.
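As a rough illustration of that plumbing (the "--headless" flag and RunAllTests name are made up for the example):

' In App.xaml.vb - per the above, the stock template already forwards args.Arguments like this.
Protected Overrides Sub OnLaunched(args As LaunchActivatedEventArgs)
    Dim rootFrame As New Frame()
    rootFrame.Navigate(GetType(MainPage), args.Arguments)
    Window.Current.Content = rootFrame
    Window.Current.Activate()
End Sub

' In MainPage.xaml.vb - pick the value up and decide whether to run unattended.
Protected Overrides Sub OnNavigatedTo(e As NavigationEventArgs)
    Dim launchArgs = TryCast(e.Parameter, String)
    If launchArgs IsNot Nothing AndAlso launchArgs.Contains("--headless") Then
        RunAllTests(launchArgs)   ' made-up entry point for the unattended test run
    End If
End Sub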
My launcher utility passes a hard-coded flag through there, as well as forwarding its own command-line arguments. That allows the top-level script to pass arbitrary data down into the Metro app. The app can use that data to realize that it's running headless and run its tests. It can spit out whatever kind of result data you like into one of its folders (like its LocalFolder), which a desktop app can then read from %LOCALAPPDATA%\Packages\APPNAME\LocalState. I set up my launcher utility to wait for the result files to appear after launching the app, and then use them to determine its own exit code. The launcher utility can't kill the app afterward, but the app can kill itself when it's done via CoreApplication.Exit.
That setup worked great for a while, but a problem that I'm running into now is that the app isn't always launched to the foreground, and the runtime will suspend/terminate the app after it hasn't been the foreground app for some amount of time (currently ~10-15 seconds). So any tests that take too long won't work with this approach, barring some workaround that I haven't discovered yet (which I was searching for when I came across this question).
I doubt you'll be able to do it.
It's the same sort of problem as trying to run a WPF app headlessly, but harder since you'd also have to deal with the Metro sandbox security model.
P.S. Happy to be proven wrong!
No, sorry. You hit a wall with your first requirement of a script that runs the Metro application in "headless" mode in the first place. Your second requirement would be your next wall: one application cannot see, let alone monitor, another application/thread/process. Your third requirement is also impossible, because files inside an application package are isolated. It sounds to me like you found a good candidate for a desktop app. Having said that, don't mistakenly think that you can't have a companion Metro application that is your dashboard. It's just that the execution core can't be hosted inside the WinRT sandbox.
I am retrofitting unit testing into a fairly complex system designed and written by other developers in VB.NET. I am trying to develop unit tests for the GUI forms using NUnit and the NUnit Forms extension. (I've been looking at C# examples, which are fairly easy to port over, so if you have a solution but don't know VB syntax that's fine, as long as it uses NUnit classes.)
I will try and explain what I am doing, but first a brief description of the program. It basically monitors server activity. You need to connect to a server via a modal form with IP and Port fields (amongst others). Once you have connected to a server, other parts of the program unlock and become usable (such as configuration of the server).
Desired process: Load program > click connect button > modal connect form loads > enter details > click OK to connect > main form updates to logged-in state > other functionality
The problem is that I cannot test the functionality of the connect form and then the logged-in functionality of the program. I can test that it loads the modal connect form correctly, enters the details and clicks OK (all fine so far), but it does not appear to logically progress the program. The modal form just closes again, seemingly without running the connect code from the program back-end, and I’m back at the main menu, not logged in to anything.
I have a feeling that I’ve either missed something really obvious or that it’s simply not doable in NUnit. I have trawled the internet in search of anything similar but the closest was another SO thread that was really generic. Without being able to actually test the logged-in version of the program, I'm at a major hurdle.
Another issue is handling message boxes that don’t have unique identifiers (e.g. “are you sure you want to exit?”); these also seem to be a major pain in the arse with NUnit.
(If it makes any difference, I’m running the tests as a stand-alone project using a reference to the executable file of the built project, not the actual source)
Can post some of my testing code if required.
IMHO the best approach to make GUI classes feasible for unit tests is to apply the Model-View-Presenter pattern and factor almost every program logic out of the form (=View) class to a separate Presenter class. Then you can unit test the Presenter class without the need for tools like "NUnit Forms".
Read Michael Feathers' article "The Humble Dialog Box" for an example in C++, you can easily apply that to Winforms, I guess.
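A bare-bones sketch of the idea in VB.NET (the IConnectView, IServerConnection, and ConnectPresenter names are just for illustration): the presenter holds the connect logic and only talks to the form through an interface, so a test can substitute a fake view.

' The view interface is all the presenter knows about the form.
Public Interface IConnectView
    ReadOnly Property IpAddress As String
    ReadOnly Property Port As Integer
    Sub ShowError(message As String)
    Sub CloseDialog()
End Interface

' Whatever service performs the real connection.
Public Interface IServerConnection
    Function Connect(ip As String, port As Integer) As Boolean
End Interface

Public Class ConnectPresenter
    Private ReadOnly _view As IConnectView
    Private ReadOnly _server As IServerConnection

    Public Sub New(view As IConnectView, server As IServerConnection)
        _view = view
        _server = server
    End Sub

    ' Called from the form's OK button; fully testable with a fake IConnectView.
    Public Sub Connect()
        If _server.Connect(_view.IpAddress, _view.Port) Then
            _view.CloseDialog()
        Else
            _view.ShowError("Could not connect to the server.")
        End If
    End Sub
End Class

An NUnit test can then drive ConnectPresenter directly with stub implementations of the two interfaces, without ever showing the modal form.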
I'm not sure about NUnit forms, but using the White library (which also works with NUnit), you're able to test the application by running the exe and mimicking user actions. The application runs normally so all application logic is performed.
Here's some example code for launching an app with White:
Dim app = White.Core.Application.Launch("MyApp.exe")
Accessing a form from your app:
Dim mainForm = app.GetWindow(SearchCriteria.ByAutomationId("MainForm"),
InitializeOption.NoCache)
Performing an action such as clicking a menu item:
mainForm.MenuBar.MenuItem("Edit", "Jobs...").Click()
Getting a control and validating its state:
Dim someTextBox = mainForm.Get(Of TextBox)(SearchCriteria.ByAutomationId("txtValue"))
Assert.IsTrue(someTextBox.Text = "12345")
I'm not sure if NUnit Forms has similar capabilities, but if not, maybe you should look into White. I ran into some issues setting it up so make sure to read the documentation carefully (not very exhaustive unfortunately) before setting it up.
I could use some help figuring out the best way to implement a "splash"/start-up page for my Silverlight 4 client applications that are built using Prism 2 and run out-of-browser.
I am supporting a suite of applications and am working on a common library of controls and services that all of the applications may use. As part of this, I am creating a subclass of the UnityBootstrapper class to register the services.
I've run into a situation where I need to 'pre-load' a couple of the services with data from the server on start-up. This could take a bit of time, so we'd like to display a splash screen while all of the start-up steps are executed. Since we are running out-of-browser, I know this isn't straightforward. Any help is appreciated.
I'm also open to other approaches for start-up data that can't be 'lazy loaded'.
Check Prism's sample project (under your Prism installation):
Prism\Quickstarts\Modularity
That will show you how to know when a module has loaded/completed.
You can then just put a busy indicator, styled over your Shell, to indicate that you are loading.
So after many trials and errors, I've come up with the following approach that I am now working through to see how well it works.
I've created a Shell UserControl in my class library that acts as a wrapper (container) for the UI. I set this control as the RootVisual. Within the content of this control, I add my splash control/view and make all of the necessary startup service calls. Using WaitHandles, I wait until all of the calls have returned before replacing the splash control with the application's start page.
The application has no idea how any of this works, which was my goal. They simply override a method I've added to the bootstrapper to make any startup service calls. The service calls are executed on a background thread and the code uses WaitHandle.WaitAll to block until all of the calls are completed which then uses Dispatcher.BeginInvoke to replace the splash with the application's main page.
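A condensed sketch of that blocking/swap logic (the two service calls, shellContent, and CreateMainPage are placeholders, not the actual implementation; it needs Imports System.Threading and runs inside the shell wrapper once the splash view is showing):

Private Sub RunStartupCalls()
    Dim done = {New ManualResetEvent(False), New ManualResetEvent(False)}

    ' Each start-up service call signals its handle from its async callback.
    lookupService.BeginLoadLookups(Sub() done(0).Set())
    settingsService.BeginLoadSettings(Sub() done(1).Set())

    ThreadPool.QueueUserWorkItem(
        Sub()
            ' Block on a background thread only - never on the UI thread.
            WaitHandle.WaitAll(done)
            ' Swap the splash for the application's main page back on the UI thread.
            Deployment.Current.Dispatcher.BeginInvoke(Sub() shellContent.Content = CreateMainPage())
        End Sub)
End Sub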
This all seems to work pretty well.
Is it possible to write a key logger in Visual Basic.NET? Is this the right language to be using?
So far, I've gotten a console app to read input and append to a file.
1) How can I make a .NET program "catch" all keyboard input?
2) How do I make a process not show up in Task Manager?
This is not for a virus, but rather a parental control program for a specific clientele. No malicious intent here.
You need to set a Keyboard Hook.
Hiding the process from Task Manager, on the other hand, is extremely difficult and is not possible on 64-bit editions of Windows.
If you're really doing this with consent, it shouldn't be necessary anyway.
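To give a rough idea of what "setting a keyboard hook" involves, here is a stripped-down WH_KEYBOARD_LL sketch in VB.NET. It only echoes virtual-key codes to the console, borrows Application.Run from System.Windows.Forms for the required message loop, and leaves out error handling, unhooking, and key-to-character translation:

Imports System.Diagnostics
Imports System.Runtime.InteropServices
Imports System.Windows.Forms

Module KeyboardHookDemo
    Private Const WH_KEYBOARD_LL As Integer = 13
    Private Const WM_KEYDOWN As Integer = &H100

    Private Delegate Function LowLevelKeyboardProc(nCode As Integer, wParam As IntPtr, lParam As IntPtr) As IntPtr

    <DllImport("user32.dll", SetLastError:=True)>
    Private Function SetWindowsHookEx(idHook As Integer, lpfn As LowLevelKeyboardProc,
                                      hMod As IntPtr, dwThreadId As UInteger) As IntPtr
    End Function

    <DllImport("user32.dll")>
    Private Function CallNextHookEx(hhk As IntPtr, nCode As Integer, wParam As IntPtr, lParam As IntPtr) As IntPtr
    End Function

    <DllImport("kernel32.dll", CharSet:=CharSet.Auto)>
    Private Function GetModuleHandle(lpModuleName As String) As IntPtr
    End Function

    ' Keep the delegate in a field so the GC doesn't collect it while the hook is installed.
    Private ReadOnly hookProc As LowLevelKeyboardProc = AddressOf HookCallback
    Private hookId As IntPtr

    Sub Main()
        hookId = SetWindowsHookEx(WH_KEYBOARD_LL, hookProc,
                                  GetModuleHandle(Process.GetCurrentProcess().MainModule.ModuleName), 0UI)
        Application.Run()   ' the hook only fires while a message loop is pumping
    End Sub

    Private Function HookCallback(nCode As Integer, wParam As IntPtr, lParam As IntPtr) As IntPtr
        If nCode >= 0 AndAlso wParam = New IntPtr(WM_KEYDOWN) Then
            Dim vkCode = Marshal.ReadInt32(lParam)   ' first field of KBDLLHOOKSTRUCT
            Console.WriteLine(CType(vkCode, Keys))
        End If
        Return CallNextHookEx(hookId, nCode, wParam, lParam)
    End Function
End Module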
Here's a sample of how to write a key logger in .NET: http://www.scratchprojects.com/2008/09/csharp_keylogger_p01.php
Your best bet for making it not show up in Task Manager is to make it look like something that belongs. Call it "svchost.exe". :-)