Issue when running tests as part of Team Services build process - asp.net-core

I have set up a build server at the company I work for.
This build server works with Visual Studio Team Services.
Building works great and so does publishing. The issue I am running into is the ability to run "dotnet test" as a different user.
This is needed because the agent currently runs under a service account. That account has access to IIS and can move files where they need to be, but it does not have access to the database.
We have a few integration tests, and they error out when connecting to the database because the connection is attempted as the service account.
So far I have not found a way to run "dotnet test" as a different user, specifically one that has permission to query the database.
I tried the VSTS task "Run Powershell on Remote Machines", since it lets me supply a username and password, but it seems to have issues remotely connecting to itself (which is probably understandable).
I am at a loss; I have no idea how to get this to work, short of giving the service account permission to run those queries against the database.
Any help is greatly appreciated!

SQL authentication is the better way here, so change the connection string to use SQL authentication:
Server=myServerName\myInstanceName;Database=myDataBase;User Id=myUsername;Password=myPassword;
Authentication article: Choose an Authentication Mode
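For contrast, an integrated-security connection string (the form that causes the tests to connect as the agent's service account) looks like the following; the server and database names here are placeholders, not values from the question:

```
Server=myServerName\myInstanceName;Database=myDataBase;Integrated Security=True;
```

Switching to the SQL-authentication form above means the credentials travel in the connection string, so the identity the agent runs under no longer matters for database access.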

You could start a process under the desired identity by passing appropriate credentials, e.g.
param($user, $pwd)
$cred = New-Object System.Management.Automation.PSCredential -ArgumentList @($user, (ConvertTo-SecureString -String $pwd -AsPlainText -Force))
Start-Process $DOTNET_TEST_COMMAND -WorkingDirectory $DESIREDCURRENTWORKINGDIR -Credential $cred
My opinion is that only unit tests should be executed during a build, as more involved tests, such as functional tests, could have side effects on the shared build machine.
Instead of running the functional tests on the build machine, I would suggest using the Run Functional Tests task during a Release in VSTS, as it allows you to:
distribute the tests across a pool of machines where you have installed the test agent (by means of the Deploy Test Agent task);
provide the credentials of the identity the tests run as; this functionality is present out of the box in the task, i.e. it solves your problem at the root.

Related

Testing if user created in AD can be logged into on a VM

I am a QA automation engineer, and the web app I test has a feature that creates Active Directory users.
My tools are Selenium (Java), RemoteWebDriver, and Selenium Grid (Docker).
I was trying to find ways to validate this process and came to a stop: this field (AD) is new to me, and I need to find a way to make sure the user was created and can be logged into on the network.
I came up with 2 options, of which the first is the least preferred:
Make a request (API? 3rd-party tool?) to get the relevant user(s).
The issue:
A user being created and registered in AD doesn't necessarily mean that a client can log in as it (at least as I understand how AD works), so this misses the most important consequence of the feature.
Use a VM, get the AD user information (username + password: possible) and try to log into the VM using those details.
The issue:
I haven't come across a tool that does this; the closest things are the Robot class and WinAppDriver.
WinAppDriver seems like the best solution as of now, although I don't know how to make the login step work, since that process starts before the desktop is open and I don't know how to locate the username and password fields. So the Robot class seems like the simplest solution, if it works on a VM that is, which as of now it doesn't seem to.
So, before advancing on learning how to use WinAppDriver with my current automation, I'd like and appreciate your opinions about the matter or if you have simpler solutions.
Thank you very much for reading!
• We can check whether a user was created successfully, and whether that user can log in to the AD domain, by executing the script below. It is a PowerShell script, run from an Azure domain-joined VM, that automatically logs in to another domain-joined VM over Remote Desktop to check whether the recently created user can log in.
PowerShell script:
cmdkey /list | ForEach-Object{if($_ -like "*target=TERMSRV/*"){cmdkey /del:($_ -replace " ","" -replace "Target:","")}}
echo "Connecting to 192.168.1.100"
$Server="192.168.1.100"
$User="Administrator"
$Password="AdminPassword"
cmdkey /generic:TERMSRV/$Server /user:$User /pass:$Password
mstsc /v:$Server
• In the above script, replace the ‘$User’ value with the user principal name of the newly created user, i.e., ‘$User=”testdemo@example.com”’, and the ‘$Password’ value with the password set for that user. Also, ensure that you enter the correct IP address of the domain controller/AD server. Before executing the above PowerShell script, execute the below command in an elevated (administrator privileges) PowerShell console.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
Lastly, please ensure that when creating the user, the options ‘User must change password at next logon’, ‘Account is Disabled’, ‘Password never expires’ and ‘User cannot change password’ are unchecked.
• Also, you can use the below command-line script for logging in to the domain-joined Azure VM over RDP. In the below command, replace ‘username’ and ‘password’ with the username and password of the recently created user. Also, replace the host portion of ‘TERMSRV/some_unc_path’ with the hostname of the server system or domain-joined VM where the specified UNC path is located, and replace ‘some_unc_path’ with the actual UNC path of the shared directory. Please execute the below command through an elevated (administrator privileges) command prompt.
Command script:
c:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -nolog -command cmdkey /generic:TERMSRV/some_unc_path /user:username /pass:pa$$word; mstsc /v:some_unc_path

Use SSH to execute robot framework test

I use SSH to execute Robot Framework test cases (Selenium), but no browser is opened; the test cases are executed in the background. How can I solve this issue?
I want to execute Robot Framework test cases on Win10, and I want to start the tests via Jenkins, which is installed on Linux. So I installed an SSH plugin in Jenkins, then created a job and executed the below command via SSH:
pybot.bat --argumentfile E:\project\robot_framework\Automation\logs\argfile1.txt E:\project\robot_framework\Automation
When I start the job, the test cases are executed in the background, but I need them to open the browser in the foreground.
ssh by definition executes commands in a different session than the current user's.
Especially considering your target is a Windows machine: imagine if you were logged in and working with desktop apps, and someone started an app through ssh in your session. That would be a little surprising (mildly put :), wouldn't it?
To get what you want, namely being able to monitor the execution, you could try the runas /user:username command, but a satisfactory end result is not guaranteed (plus you have to provide the target user's password).
Another option is to use psexec, available on TechNet, but YMMV - you have to provide the target session id, some programs might not show up - or might not be maximizable, etc. The call format would be
psexec.exe -i 2 pybot.bat the_rest_of_the_args
I.e. a lot of effort and uncertainties, with minimal ROI :)

MSTests fail on build server after passing locally

This is a bit odd for me, I've worked through several micro-services with unit tests and haven't experienced a problem like this. The issue is that my unit tests pass locally but some fail on our build server. The oddity about this is that if I single out a failing test in the build script it will pass. If I run it with a test that was running before it I get the failure result. If I remote into the test server and access the test result file and rerun all the tests, they will all pass. So to me this says it has to do with my build environment - likely the "runner" context. The specific error I get on failed tests is:
System.Data.Entity.Core.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.
Again, a test that accessed the DB had run and passed right before this failing test. Additionally, it should be noted that these tests use stored procedures to access the data and also use LINQ to SQL (.dbml). I initially thought it had to do with the context not being properly disposed of, but after several iterations of IDisposable implementation and peppering using statements throughout the data access code, I think I have ruled that out. I even went so far as to rip out the .dbml reference and new up an entity model (.edmx), but ended up getting the same results in the end, after much simplification of the problem. I can reproduce the issue with just 2 unit tests now: one will pass, one will fail. When run separately they both pass; when run manually, either locally or on the build server, both will pass.
Dev Server
We have our dev environment set up on a remote server. All devs use VS 2013 Ultimate, and all devs use a shared instance of localdb. This seems to be working fine; I am able to develop and test against this environment, and all my tests pass here for the solution in question. Then I push code upstream to the build server.
Build Server
This is a Windows 2012 server with GitLab installed; every commit to our dev branches runs a build via the .gitlab-ci.yml build script. For the most part this is just simple msbuild -> mstest calls, nothing too fancy. This server also has its own shared instance of localdb running, with matching schemas from the Dev environment. Several other repositories have passing builds/unit tests using this setup. The connection strings for accessing data all use integrated security, and the GitLab runner service account has full privs on the localdb. The only thing I can identify as notably different about the solution in question is the heavy use of sprocs; however, like I was saying, some of these unit tests do pass and they all use sprocs. Like I also mentioned, after the build fails, if I manually go in, access the test results file and manually invoke the tests on the build server, they suddenly all pass.
So I'm not really sure what is going on here, has anyone experienced this sort of behavior before? If so, how did you solve it? I'd like to get these tests passing so my build can pass and move on.
OK, I have got the unit tests to pass, but it was a weird thing that I ended up doing to get them there, and I'm not quite sure why it worked. Even though the runner's account had FULL "Server Role" privs on the localdb instance (all the boxes checked), I decided to throw up a hail Mary and went through the process of "mapping" the user to the DB in question, setting his default schema to dbo and giving him full privs (all boxes checked). After this operation... the tests pass. So I'm clearly not understanding something about the way permissions are propagated in localdb; I was under the assumption that a god-like server role would imply full privs on individual DBs, but I guess not. I'm no DBA; I'm actually going to chat with our DBA and see what he thinks. Perhaps this has something to do with the sproc execution? Maybe that requires special privs? I do know in the past I've had to create special roles to execute sprocs, so they are a bit finicky. Anyways, unit tests pass so I'm happy now!
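For what it's worth, the "mapping" step described above corresponds roughly to the following T-SQL. The database and login names here are placeholders, not the actual ones from this build server:

```sql
-- Map the runner's Windows login to a user in the test database
-- (DOMAIN\gitlab-runner and MyTestDb are hypothetical names)
USE MyTestDb;
GO
CREATE USER [DOMAIN\gitlab-runner] FOR LOGIN [DOMAIN\gitlab-runner]
    WITH DEFAULT_SCHEMA = dbo;
GO
-- Roughly what "all boxes checked" amounts to at the database level
ALTER ROLE db_owner ADD MEMBER [DOMAIN\gitlab-runner];
GO
-- If db_owner is too broad, EXECUTE can be granted explicitly instead,
-- which covers the stored-procedure calls the tests make
GRANT EXECUTE TO [DOMAIN\gitlab-runner];
```

This also hints at why the mapping mattered: apart from sysadmin, server-level roles govern instance-wide rights, and a login still needs a mapped database user (or explicit grants) to use objects inside a specific database.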

Powershell script to execute DDL statements on linked servers - not working when run using SSIS

I have a Powershell script that loops through a list of SQL Servers and creates server logins and database users.
The script runs on a separate server, under the administrator credentials on that server, and connects to the other SQL Servers via linked servers.
# Get administrator credentials
$password = Get-Content C:\Powershell\General\password.txt | ConvertTo-SecureString;
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "DOMAIN\administrator",$password;
When this script is run manually (either directly through a Powershell window or using a batch file through a command prompt) it works perfectly well. I am logged onto the executing server as administrator when running the script manually.
I then tried to run this Powershell script using an SSIS package on the executing server, using the Execute Process Task to run a batch file. The package was executed from a SQL Agent Job. Although both the job and the package seemed to execute successfully, the DDL statements were not executed against the linked servers.
SQL Agent on the executing server is run under a designated Service Account. SSIS runs under the Network Service account.
Does anybody have any thoughts on what I might be doing wrong? I am happy to provide details of the script or anything else that is required.
Thanks
Ash
UPDATE: ok we have a little more information.
I took out the lines I posted above as I have discovered I don't actually need the administrator credentials I was retrieving.
I logged onto the server with the script on it using the service account. As per @ElecticLlama's suggestion, I set a Profiler trace on the destination server. When running the script manually (or running a batch file manually that runs the PowerShell script), everything works well and the Profiler shows the DDL actions, under the service account login.
When running a job through SQL Agent (either a CmdExec job or an SSIS package) that runs the same batch file, I get the following error:
'Login failed for user 'DOMAIN\ServiceAccount'. Reason: Token-based server access validation failed with an infrastructure error.'
Anybody have any further thoughts?
Thanks to everyone for their help. Once I got that last error, a quick search revealed I just had to restart SQL Agent, and now everything works as it should. Thanks in particular to @ElecticLlama for pointing me in the right direction.
Ash

Stop IIS 7 Application Pool from build script

How can I stop and then restart an IIS 7 application pool from an MSBuild script running inside TeamCity? I want to deploy our nightly builds to an IIS server for our testers to view.
I have tried using appcmd like so:
appcmd stop apppool /apppool.name:MYAPP-POOL
... but I have run into elevation issues on Windows 2008 that have so far stopped me from being able to run that command from my TeamCity build process, because Windows 2008 requires elevation in order to run appcmd.
If I do not stop the application pool before I copy my files to the web server my MSBuild script is unable to copy the files to the server.
Has anybody else seen and solved this issue when deploying web sites to IIS from TeamCity?
This article describes using an HTML file named App_offline.htm to take a site offline. Once ASP.NET detects this file in the root of a web application directory,
ASP.NET 2.0 will shut down the application, unload the application
domain from the server, and stop processing any new incoming requests
for that application.
In App_offline.htm, you can put a user-friendly message indicating that the site is currently under maintenance.
Jason Lee shows the MSDeploy calls you need to use (plus much more about integrating these steps in your build scripts!).
MSDeploy
-verb:sync
-source:contentPath="[absolute_path]App_offline-Template.htm"
-dest:contentPath="name_of_site/App_offline.htm",computerName="computer_name",
username=user_with_administrative_privileges,password=password
After deployment you can remove the App_offline.htm file using the following call:
MSDeploy
-verb:delete
-dest:contentPath="name_of_site/App_offline.htm",computerName="computer_name",
username=user_with_administrative_privileges,password=password
The msbuild community tasks includes an AppPoolController that appears to do what you want (though as noted it is dated and at present only supports IIS6.) An example:
<AppPoolController ApplicationPoolName="MyAppPool" Action="Restart" />
Note that you can also provide a username and password if necessary.
Edit: Just noticed that the MSBuild Extension Pack has an Iis7AppPool task that is probably more appropriate.
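As a sketch of what that might look like (the task and attribute names here are from memory of the Extension Pack documentation and should be verified against its help, as should the import path), stopping the pool around a deployment could be wired up like this:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Assumed install location of the MSBuild Extension Pack; adjust as needed -->
  <Import Project="$(MSBuildExtensionsPath)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks" />
  <Target Name="DeployWithPoolRecycle">
    <!-- Stop the pool so file copies are not blocked by locked files -->
    <MSBuild.ExtensionPack.Web.Iis7AppPool TaskAction="Stop" Name="MYAPP-POOL" />
    <!-- ... copy the build output to the web server here ... -->
    <MSBuild.ExtensionPack.Web.Iis7AppPool TaskAction="Start" Name="MYAPP-POOL" />
  </Target>
</Project>
```

Note that whatever account runs the build agent still needs sufficient rights on the IIS server for this to succeed.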
This is the fairly hacky workaround I ended up using:
1) Set up a limited-access account for your service to run as. Since I'm running a CruiseControl.NET service, I'll call my user 'ccnet'. He does NOT have admin rights.
2) Make a new local user account and assign it to the Administrators group (I'll call him 'iis_helper' for this example). Give him some password, and set it to never expire.
3) Change iis_helper's access permissions to NOT allow local login or remote desktop login, and do anything else you might want to lock down this account.
4) Log in (either locally or through remote desktop) as your non-admin user, 'ccnet' in this example.
5) Open a command terminal, and use the 'runas' command to execute whatever it is that needs to be run escalated. Use the /savecred option. Specify your new administrative user.
runas /savecred /user:MYMACHINE\iis_helper "C:\Windows\System32\inetsrv\appcmd.exe"
The first time, it will prompt you for 'iis_helper's password. After that, it will be stored thanks to the /savecred option (this is why we're running it once from a real command prompt, so we can enter the password once).
6) Assuming that command executed OK, you can now log out. I then logged back in as a local admin and turned off the 'ccnet' user for local interactive login and remote desktop. The account is only used to run a service, not for real logins. This isn't a mandatory step.
7) Set up your service to run as your user account ('ccnet').
8) Configure whatever service is running (CruiseControl.NET in my case) to execute the 'runas' command instead of 'appcmd.exe' directly, the same as before:
replace:
"C:\Windows\System32\inetsrv\appcmd.exe" start site "My Super Site"
with:
runas /savecred /user:MYMACHINE\iis_helper "\"C:\Windows\System32\inetsrv\appcmd.exe\" start site \"My Super Site\""
The thing to note there is that the command should be in one set of quotes, with all the inner quotes escaped (slash-quote).
9) Test, call it a day, hit the local pub.
Edit: I apparently did #9 in the wrong order and had a few too many before testing...
This method also doesn't completely work. It does attempt to run as the administrative account; however, it still runs as a non-elevated process under the administrative user, so there are still no admin permissions. I didn't initially catch the failure because the 'runas' command spawns a separate cmd window that closes right away, so I wasn't seeing the failure output.
It's starting to seem like the only real possibility might be writing a Windows service that runs as admin, whose only purpose is to run appcmd.exe, and then somehow calling that service to start/stop IIS.
Isn't it great how UAC is there to secure things, but in actuality just makes more servers insecure, because anything you want to do you have to do as admin, so it's easier to just always run everything as admin and forget it?
You can try changing the Build Agent service settings to log on as a normal user account instead of SYSTEM (the default); this can be done from the Services control panel (Start | Run | services.msc).
If that doesn't help, you can also try configuring appcmd to always run elevated; refer to this document for details.
In case such an option is not available for appcmd, or it still doesn't work, you can disable UAC completely for this user.
Here you go. You can use this from CC.NET with NAnt, or with NAnt on its own:
http://nantcontrib.sourceforge.net/release/latest/help/tasks/iisapppool.html