Visual Studio Team Services Test Running - Selenium

Apologies if something similar has been asked before; I couldn't find anything, so just point me in the right direction if so.
I'm brand new to test automation. I will be writing Selenium tests against a third-party website hosted on an internal network. Our source control is provided by Visual Studio Team Services, although it is possible I could install TFS on-premises.
Eventually I need to schedule test runs. I believe all of this can be done with Team Services; I've seen some demos, all good.
I will be using a URL to access the system under test, which is on our internal network. If Team Services tries to run a Selenium test and connect to that URL, I imagine it will fail, as it runs from wherever Microsoft is holding the code and building.
I don't think there is any chance we would allow Team Services access to our internal network, even if that were possible.
So the question is: what are my options? Can the build be moved from VS Team Services onto a local machine to run the tests with the internal URL? Is this a good idea if it can? Am I relying too much on the internet for testing on our internal network, and is this a risk?
I have spent a bit of time on "the google" but am struggling to find much information; it's possible I am asking the wrong questions.
Any help is greatly appreciated; links to articles are fine, I don't mind doing the legwork, I just need some pointers.
Many thanks for your help, apologies if any of that makes no sense.

You have a few options:
Install a VSTS Build agent on-premises and connect it to VSTS. The agent connects to VSTS using an outbound connection, and from there it can execute Build and Release pipelines and orchestrate the execution of tests (a test sketch follows these options). You can either put this agent in a specific Agent Pool or Agent Queue, or you can add a Capability to it (e.g. "onprem"). By setting the build definition to use the specified pool/queue, that agent will be selected; or, by adding the demand "onprem" to your build definition, you ensure it always requires that capability of any agent.
Use TFS 2015u3 or TFS 2017 with the same agent, but that would mean you lose all the goodness VSTS brings with regard to licenses, "free upgrades" and all.
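For illustration, here is a minimal sketch of the kind of C# Selenium test such an on-premises agent could run against the internal site. The URL, class names and the NUnit/Selenium WebDriver packages are assumptions for the sketch, not anything from the original question:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class InternalSiteSmokeTest
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp()
    {
        // The on-prem agent machine can resolve internal hostnames,
        // which the hosted VSTS infrastructure cannot.
        _driver = new ChromeDriver();
    }

    [Test]
    public void HomePage_Loads()
    {
        // Hypothetical internal URL, only reachable inside the network.
        _driver.Navigate().GoToUrl("http://thirdparty.corp.local/app");
        Assert.That(_driver.Title, Is.Not.Empty);
    }

    [TearDown]
    public void TearDown()
    {
        _driver.Quit();
    }
}
```

The point is that the test itself doesn't change; what matters is which agent executes it.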
With regard to security:
Adding an agent to your network that executes commands queued on a cloud service adds a risk. You can minimize that risk by running the build agent under a limited account, using Active Directory to limit the machines this user can run processes on or log on to, and limiting access to the agent through permissions on the Queue and Pool as well. You can ensure that the users who have access to this pool, and all your VSTS administrators, have configured two-factor authentication on their AAD accounts and, if needed, add IP access control to these accounts as well. It's recommended that users who administer such agent pools/queues do not have alternate credentials configured, and that the Personal Access Token used to register the agent is scoped to only the permissions required to do just that.
With these extra measures in place you'll have a pretty secure setup, and it beats the hassle of having to install, back up and maintain a couple of TFS servers on-premises.

Related

How do I launch/publish my website? ASP.NET Core

I'm new to web development and just built my first website with .Net Core. It's primarily HTML, CSS, and JavaScript with a little C# for a contact form.
Without recommending any service providers (otherwise the question will be taken down), how do I go about deploying the website? The more details the better, as I have no idea what I'm doing, haha.
Edit: I am definitely going to go with a service provider; however, the business I am building the website for doesn't have a large budget, so I want to find the best provider at the lowest cost.
Daniel,
As you suspect, this is a bit of a loaded question, as there are so many approaches. One approach is to use App Services within Microsoft Azure. You can create a free trial Azure account to start, which includes a $200 credit; that is more than enough to do all of this for free. Then, using the Azure Management Portal, create an App Service (also free) on an App Service Plan in a region that makes sense for you (e.g. US West). Once you do that, you can download what is called a Publish Profile from within the App Service's management portal in Azure.
If you're using Visual Studio, for example, you can then right-click your project and "Publish" it (deploy it to the cloud, i.e. to the App Service you just created). One option in that process is to import an Azure Publish Profile, which you can do with the one you just downloaded. This makes it really simple. The Publish Profile is really just connection information to your Azure App Service (open it in Notepad to see). It will chug for a bit and then publish and load the app for you. You can also get to the hosted version of your app by clicking the URL of the app on the main page of the App Service management portal.
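For a rough idea of what's inside, a downloaded .PublishSettings file looks something like the sketch below; all values here are placeholders, and a real file typically also contains an FTP profile:

```xml
<publishData>
  <publishProfile profileName="mysite - Web Deploy"
                  publishMethod="MSDeploy"
                  publishUrl="mysite.scm.azurewebsites.net:443"
                  msdeploySite="mysite"
                  userName="$mysite"
                  userPWD="generated-password"
                  destinationAppUrl="http://mysite.azurewebsites.net" />
</publishData>
```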
This may be oversimplifying what you need to do, but this is a valid direction to take. AWS and others have similar approaches.
Again, tons of ways to do this, but this is a free approach. :-) I don't consider Azure a Service Provider in the sense that you asked us not to. Instead, I wanted to outline one turn-key approach with specific details on how to get there.
You can find specific steps in a lot of places, such as this link:
https://www.geeksforgeeks.org/deploying-your-web-app-using-azure-app-service/
DanielG's answer is useful, but you mentioned you don't want to use any services from a service provider.
Usually, there are only three ways to deploy a web application.
The first is the app service provided by a service provider, as DanielG mentioned.
**Benefits of using service provider products:**
1. Very friendly to newbies: follow the documentation and you can deploy the application in a few minutes.
2. It offers a very stable, scalable service that monitors the health of our website.
3. We can get their technical support.
**Shortcoming**
It is a paid service, and although Azure's service has a free quota, it will run out.
**Suggestion**
It is recommended that officially launched (production) websites use a service provider's services.
The second is to use a fixed IP for access (though it seems fixed IPv4 addresses are no longer provided by network operators).
**Benefits of using fixed IP:**
If there is a fixed IP address, or if the carrier supports IPv6, we can deploy our website ourselves and the public network can access it. And if you have a domain, it can also support HTTPS. (See the sketch after this option.)
**Shortcomings**
1. There are cybersecurity risks, and a self-hosted site is vulnerable to attack.
2. Without proper health monitoring, every problem has to be investigated by yourself, and elastic scaling is very troublesome to achieve.
**Suggestion**
It is generally not recommended, because under normal circumstances there is no fixed IP; broadband operators used to offer one, but generally no longer do.
If you are interested, you can experiment with IPv6.
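As a minimal sketch of what self-hosting can look like with ASP.NET Core (which the asker is using): the Program.cs below listens on all network interfaces, so requests arriving at the machine's fixed public IP or IPv6 address reach the site. The port and the response text are assumptions:

```csharp
// Program.cs - minimal self-hosted ASP.NET Core app (.NET 6+ style).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Bind to all interfaces, not just localhost, so the public
// network can reach the site through the machine's IP; port 80 assumed.
app.Urls.Add("http://0.0.0.0:80");

app.MapGet("/", () => "Hello from a self-hosted site!");

app.Run();
```

You would still need to open the port on the machine's firewall, and configure a certificate if you want HTTPS.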
The last is to use tools such as ngrok or frp for intranet penetration (tunneling).
**Benefits of using intranet penetration:**
With free intranet-penetration services such as ngrok, the URL generated by each run is not fixed, and there are some limitations, such as a new URL being generated after a certain period of time, but this is enough for testing. (See the example after this option.)
Of course, you can purchase the paid service of such a tool, which provides fixed URLs and supports HTTPS.
**Shortcomings (same as the second option)**
**Suggestion**
Functionally this is the same as the second option: the physical devices running the website are still your own, and the intranet-penetration tool (ngrok or frp) solves the problem of not having a fixed IP by providing a URL you can access.
When there are few users and the demand on the web service is not high, ngrok or frp is recommended for individual users or small businesses in this scenario; it is generally suitable for internal (OA) use in small businesses.
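As a concrete example of the free-tier workflow: assuming the site runs locally on port 5000 (the port is an assumption), a typical ngrok invocation is just:

```
ngrok http 5000
```

ngrok then prints a generated public URL that tunnels to the local port; on the free tier that URL changes between runs, as noted above.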

API automation execution from CI/CD Platforms

My question is about API automation execution from CI/CD Platforms like Jenkins/Bamboo/Azure etc.
For API automation, it is important to have control over the machine from which the APIs are triggered, as we may need to open the firewall for some APIs or add certificates to the Java installation on that machine.
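For context, "adding certificates to Java" usually means importing an internal CA certificate into the JVM's trust store with keytool; a sketch along these lines, where the alias, file name and trust-store path are hypothetical and vary by Java version:

```
keytool -importcert -trustcacerts \
  -alias internal-ca \
  -file internal-ca.crt \
  -keystore "$JAVA_HOME/lib/security/cacerts" \
  -storepass changeit
```

This is exactly the kind of machine-level change a shared-agent team is unlikely to allow.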
But if I run my API tests from the CI/CD agent machines, I cannot have that control, as those agent machines are maintained by an organization-level team that will not entertain such modifications, because the agents are used by all the other teams.
How can this issue be overcome? How is it actually done?
If anyone out there has faced the same situation in their own company, I want to hear from them.
Thanks a lot for your support.

Liferay Cloud IDE, multiple developers working on the same Liferay server

We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM, so we want to centralize the server instance.
In other words, we want to build a development server where all developers can connect and develop directly in their web browser, compile, view the result and push the code to a Git repository.
I found some good cloud IDEs like Eclipse Che and a good Maven archetype for Liferay projects, so I can build the project with Maven. Now I want to know whether it is possible to configure Liferay so that every developer can work without disturbing the others, and if so, how.
The developers can share the same database and can use different ports. Maybe the server could generate temporary URLs, like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think it is the best way because it creates one server per project, which I think is too heavy.
If necessary, we have Kubernetes in our IS.
Liferay's Tomcat bundle, by default, is configured to take a maximum of 2.5 GB for the process, but it can run with far less. The default was only recently bumped up, because many people never change it and then wonder why production systems run out of memory. For one concurrent user (the sole developer) on a machine, I'd guess the previous default of 1 GB of heap space is enough. Are you saying that that's too much for your developers' machines?
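For reference, the heap settings live in the Tomcat bundle's bin/setenv.sh (setenv.bat on Windows); a single-developer instance could be trimmed to something like the excerpt below. The values are an assumption for illustration, not a Liferay recommendation:

```
# tomcat/bin/setenv.sh (excerpt): cap the heap for a one-user
# developer instance; the shipped default is considerably higher.
CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m"
```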
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and nobody can properly debug. You pay for the memory once, but you pay your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What do the actual developer machines look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure - i.e. have all of them connect to the same database server (and give everyone their own schema). But just the extra effort and setup might require you to pay the developers by the hour as much as you'd otherwise pay for a couple of memory chips.
And yet another option is: run Liferay on a remote server, but keep one instance per developer. This way you don't need the local memory and can have the memory in the cloud instead. Calculate whether you'd pay more for remote cloud machines than for local memory; that decision is up to you.

How to rapidly publish a web role cloud service, uploading only binaries and avoiding wholly restarting the VM?

Possible ways to accomplish it:
Creating a dedicated WCF service for this purpose (currently my favorite option)
Using the REST API?
Azure PowerShell?
Explanation:
Publishing a web-role cloud service takes about 10 minutes. That's much too long during development; I try to do as much as I can offline, unit-test-ish and modular, but it's just impossible to avoid development cycles with the VM altogether.
Apparently, the long time is mostly a result of the machine being wholly restarted, so I'm trying to find an automated solution, like uploading and installing only the binaries.
What is the best way to accomplish it?
What do you think? Would it cut at least 50% of the publishing time?
Do you expect any critical problems?
The solutions proposed below are definitely against best practices and should NEVER-EVER be used in a production environment.
If your objective is to quickly test your changes in your development environment, there are two ways you can go about it.
Enable RDP and copy files manually: You could enable Remote Desktop on your web role and copy your modified binaries or other files directly into the appropriate folders on the VM.
Use Web Deploy: This only works for web roles in your project, but you could enable Web Deploy on your web roles and use that for faster deployments. Please see this link for more details on how to use this feature: https://msdn.microsoft.com/en-us/library/azure/ff683672.aspx.

WCF: Debugging service through Terminal Services

I'm part of a distributed development team. We all work through terminal services, accessing a remote server where our applications are located.
We're working on a project in which a client application consumes a WCF service, which exposes all the business logic functionality.
In our development process, a developer is often asked to develop an entire use case from user interface to database access, including the service and the business logic.
In such cases the developer must be able to debug the server-side functions/methods that he or she has built for a given use case. The problem is that the service must be running, and when another developer needs to debug his or her work, an exception is thrown (I think it is 'AddressAlreadyInUseException', not sure) and the second developer is not able to perform any kind of debugging against the service. This happens even though we (of course) have different Windows usernames and hence are working in different sessions.
It's still possible for the client app to continue working with the 'original' service instance, since we're catching the exception at the service, but debugging is impossible. And if the first developer stops the WCF service, then the app fails.
I would like to know if you have any recommendations for us. Maybe there's some sort of tool available (even if we must pay for it) that could somehow isolate each developer's workspace on the server, or maybe we just need to change something in the way we work.
I would be very grateful for any kind of advice or clue.
Best regards,
Gonzalo
I would recommend that each developer have their own copy of the server services.
When we develop, each developer has a full environment on their own machine. As things are completed, they are checked in to the version control system, and when the other developers get the latest version, the new functionality is spread to them.
If I understand your setup correctly, all developers are working against the same server; in that case a programming error by one developer will stop all development.
The debugger connects through IP communication. If a service or process binds a listener on a port, no other service or process can bind that IP port a second time.
That is the reason the exception is thrown.
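One workaround sketch, assuming you can change how the development service host is started: derive a per-developer port from the Terminal Services session id, so each session binds its own endpoint instead of colliding on one. The contract, service name and base port here are hypothetical:

```csharp
using System;
using System.Diagnostics;
using System.ServiceModel;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        // Each Terminal Services session has a distinct session id, so every
        // developer's host binds a different port (base port 8000 is arbitrary).
        int port = 8000 + Process.GetCurrentProcess().SessionId;
        var baseAddress = new Uri("http://localhost:" + port + "/EchoService");

        using (var host = new ServiceHost(typeof(EchoService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IEchoService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Listening on " + baseAddress);
            Console.ReadLine();
        }
    }
}
```

Note that binding an HTTP address as a non-admin user may additionally require an http.sys URL reservation (netsh http add urlacl).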
In Citrix you have the Virtual IP configuration, which can give each session its own IP address.
You can also consider placing a VM on the server that serves only one developer. That would also solve the problem.