Block internet access with TestComplete - testing

Is there any way to block internet access for the application under test with TestComplete? I'd like to test the application's reaction to losing its internet connection, but I have to perform this with a CI tool, which means TestComplete has to block and unblock the connection.

You can do this using WMI directly from a script (see Working With WMI Objects in Scripts) or by executing a PowerShell script (see Running PowerShell Scripts From TestComplete).
For example, see this question to get a sample PS script:
Command/Powershell script to reset a network adapter
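For illustration, a minimal PowerShell sketch along the lines of the linked question; "Ethernet0" is a placeholder adapter name (list yours with Get-NetAdapter), and the script needs an elevated session:

# Assumed adapter name - adjust to your machine; requires elevation
$adapter = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = 'Ethernet0'"
$adapter.Disable()   # the application under test now has no internet access
# ... run the offline part of the test here ...
$adapter.Enable()    # restore the connection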


Execute a Spoon job with external software

I have a job built in Spoon that runs without problems from the command line, but I would like to know if there is any software in which I can execute these jobs and watch the execution visually. The idea is to make running these tasks more pleasant for the operations area.
You have two solutions:
Carte:
Use the Carte server, which ships with PDI. Install PDI on any server and launch Carte, specifying the port (as sketched below); then you can execute/view/stop/restart jobs and transformations from any browser. Documentation is here.
Of course, you can also launch a job/transformation from your own PDI: just define a new Slave server (left panel, View tab; the default username/password is cluster/cluster). Then, each time you run a job/transformation, choose the Carte server instead of Pentaho/local in the Run configuration.
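For reference, a minimal sketch of launching Carte from the PDI installation directory (host and port are assumptions; on Linux use carte.sh):

Carte.bat localhost 8081

After that, the status pages are served at http://localhost:8081 (default credentials cluster/cluster, as above).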
Logging
If you just want to follow the job/transformation, you may use database logging: right-click anywhere, then Parameters, Logging, Job/Transformation, and define a database, a table, and a logging interval of 2 seconds.
Then every two seconds the line_read, line_written, errors, and log_field values are written to the database, which can be read by an external process and displayed on screen or in a browser (a sample query is sketched at the end of this answer).
This method is used in the github/ETL-pilot project, which uses Tomcat (because you probably already have Tomcat running with a Pentaho server), but it can easily be adapted to Node.js or any other server. (If you do it and open-source it, please add a link to your work on our GitHub.)
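As an illustration, the external process could simply poll the logging table; a sketch, assuming a table called job_log and the default PDI job-log column names:

SELECT jobname, status, lines_read, lines_written, errors, log_field
FROM job_log
ORDER BY id_job DESC;

Run this every couple of seconds and render the latest row.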

Use SSH to execute Robot Framework tests

I use SSH to execute Robot Framework test cases (Selenium), but no browser is opened; the test cases are executed in the background. How can I solve this issue?
I want to execute Robot Framework test cases on Windows 10 and start the tests via Jenkins, which is installed on Linux, so I installed an SSH plugin in Jenkins, created a Jenkins job, and executed the command below via SSH:
pybot.bat --argumentfile E:\project\robot_framework\Automation\logs\argfile1.txt E:\project\robot_framework\Automation
When I start the job, the test case is executed in the background, but I need the test case to open the browser in the foreground.
ssh by definition executes commands in a different session than the current user's.
Especially considering your target is a Windows machine - imagine if you were logged in and working with desktop apps, and someone started an app through ssh in your session - it would be a little surprising (mildly put :), wouldn't it?
To get what you want - being able to monitor the execution - you could try the runas /user:username command, but a satisfactory end result is not guaranteed (plus you have to provide the target user's password). For example:
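(The user name here is a placeholder; runas will prompt for the password interactively.)

runas /user:WIN10-HOST\tester "pybot.bat --argumentfile E:\project\robot_framework\Automation\logs\argfile1.txt E:\project\robot_framework\Automation"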
Another option is to use psexec, available on TechNet, but YMMV - you have to provide the target session id, some programs might not show up - or might not be maximizable, etc. The call format would be
psexec.exe -i 2 pybot.bat the_rest_of_the_args
I.e. a lot of effort and uncertainties, with minimal ROI :)

Finalize wsadmin script task "createAuthDataEntry"

I have a follow-up question to this issue: is it possible to finalize the AdminTask.createAuthDataEntry task in one wsadmin script?
I need to invoke this task so that WAS can establish a connection to a datasource that I have defined in the same script.
Defining an auth entry from the web console does not require a restart. Typically I would not expect that a restart would be required for authentication changes.
I have tried to use the task AdminControl.invoke(AdminControl.queryNames('WebSphere:*,type=Server,node=%s,process=%s' % ('node', 'server')), 'restart') inside the script, but this stops the instance without booting it up again. Also, I cannot verify the datasource connection within the same script because of these limitations.
Creating or modifying authentication data entries from wsadmin requires a server restart. We have an RFE to allow wsadmin to make dynamic updates to them without a server restart which you can vote for. In order to stop and start your server using wsadmin, it's probably easiest for the OS-level (bat or sh) script that invokes wsadmin to call two scripts.
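A minimal sketch of such a wrapper, assuming a default profile layout and script names of my own choosing (createAuthEntry.py runs AdminTask.createAuthDataEntry and saves; verifyDataSource.py tests the connection):

call %WAS_HOME%\profiles\AppSrv01\bin\wsadmin.bat -lang jython -f createAuthEntry.py
call %WAS_HOME%\profiles\AppSrv01\bin\stopServer.bat server1
call %WAS_HOME%\profiles\AppSrv01\bin\startServer.bat server1
call %WAS_HOME%\profiles\AppSrv01\bin\wsadmin.bat -lang jython -f verifyDataSource.py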

MSTests fail on build server after passing locally

This is a bit odd for me; I've worked through several micro-services with unit tests and haven't experienced a problem like this. The issue is that my unit tests pass locally but some fail on our build server. The oddity is that if I single out a failing test in the build script, it will pass; if I run it together with a test that runs before it, I get the failure. If I remote into the test server, open the test result file, and rerun all the tests, they all pass. So to me this says it has to do with my build environment - likely the "runner" context. The specific error I get on failed tests is:
System.Data.Entity.Core.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.
Again, a test that accessed the DB had run and passed right before this failing test. Additionally, it should be noted that these tests use stored procedures to access the data and also use LINQ to SQL (.dbml). I initially thought it had to do with the context not being properly disposed of, but after several iterations of IDisposable implementation and peppering using statements throughout the data access code, I think I have ruled that out. I even went so far as to rip out the .dbml reference and new up an entity model (.edmx), but ended up getting the same results in the end, after much simplification of the problem. I can reproduce the issue with just 2 unit tests now: one will pass, one will fail. When run separately they both pass; when run manually, either locally or on the build server, both pass.
Dev Server
We have our dev environment set up on a remote server. All devs use VS 2013 Ultimate. All devs use a shared instance of localdb. This seems to be working fine; I am able to develop and test against this environment, and all my tests pass here for the solution in question. Then I push code upstream to the build server.
Build Server
This is a Windows 2012 server with GitLab installed; every commit to our dev branches runs a build via the .gitlab-ci.yml build script. For the most part this is just simple msbuild -> mstest calls, nothing too fancy. This server also has its own shared instance of localdb running with schemas matching the dev environment. Several other repositories have passing builds/unit tests utilizing this setup. The connection strings for accessing data all use integrated security, and the GitLab runner service account has full privs on the localdb. The only thing I can identify as notably different about the solution in question is the heavy use of sprocs; however, as I said, some of these unit tests do pass and they all use sprocs. As I also mentioned, after the build fails, if I manually go in, access the test results file, and manually invoke the tests on the build server, they suddenly all pass.
So I'm not really sure what is going on here, has anyone experienced this sort of behavior before? If so, how did you solve it? I'd like to get these tests passing so my build can pass and move on.
OK, I have got the unit tests to pass, but it was a weird thing that I ended up doing to get them to pass, and I'm not quite sure why it worked. Even though the runner's account had FULL "Server Role" privs on the localdb instance (all the boxes checked), I decided to throw up a Hail Mary and went through the process of mapping the user to the DB in question, setting his default schema to dbo and giving him full privs (all boxes checked). After this operation... the tests pass. So I'm clearly not understanding something about the way permissions are propagated in localdb; I was under the assumption that a god-like server role would imply full privs on individual DBs, but I guess not? I'm no DBA; I'm actually going to chat with our DBA and see what he thinks. Perhaps this has something to do with the sproc execution? Maybe that requires special privs? I do know that in the past I've had to create special roles to execute sprocs, so they are a bit finicky. Anyway, the unit tests pass, so I'm happy now!
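For anyone who hits the same thing: what I did in the UI corresponds roughly to the following T-SQL against the localdb instance (login and database names are placeholders):

USE MyTestDb;
CREATE USER [BUILDSRV\gitlab_runner] FOR LOGIN [BUILDSRV\gitlab_runner] WITH DEFAULT_SCHEMA = dbo;
ALTER ROLE db_owner ADD MEMBER [BUILDSRV\gitlab_runner];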

echoid.exe remote execution issue (wrong Locking Code output)

I am trying to collect the locking codes of a server farm in an automated way.
So, on each remote server I have echoid.exe and a batch file.
The batch file simply executes echoid.exe and writes its output into a text file which I can parse (the setup is sketched at the end of this question).
The problem is that when I trigger the .bat file remotely, it seems like echoid.exe is executed on the controlling host (the one I'm using to send the execution command, through psexec for example) rather than on the remote host, meaning the locking code output is wrong. If the same .bat file is executed locally (and manually), the results are OK.
Any idea why? Does anyone know how I can run echoid remotely and get the correct results?
I have tried several remote approaches and they all failed and returned wrong results :(
please help!
BTW, all the remote machines run Windows.
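For reference, my setup looks roughly like this (paths and host name anonymized). I also wonder whether psexec without an explicit \\computer argument runs the command on the local machine - that would match the symptom.

rem run_echoid.bat, deployed on each remote server
C:\tools\echoid.exe > C:\tools\lockcode.txt

rem triggered from the central host
psexec \\remoteserver -w C:\tools C:\tools\run_echoid.bat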