OutSystems - Add another environment

I previously created a personal environment in OutSystems 11, but I need to create a new one in OutSystems 10 because that is the version I need. I searched online and found how to add another environment in LifeTime in the cloud, but the options in my environment management look like this only.
How can I add a new environment? Thanks for any help.

In a Personal Environment you can have only one stage/environment. Adding other stages is only available on paid subscriptions.
In any case, it's no longer possible to launch new version 10 infrastructures. Only v11 infras are now available.

As Miguel said, only paid subscriptions can have multiple environments, and each additional environment is paid. Also, personal environments always run the latest version of the platform. The only way to get a new environment is to create another account.

Related

How to extract environment variables in Rancher automatically

First of all, sorry if this thread is not appropriate for Stack Overflow, but I think it is the best place to ask.
We are using Rancher to manage a microservices solution. Most of the containers are NodeJS + Express apps, but there are others like Mongo or Identity Server.
We use many environment variables, such as endpoints or environment constants, and when we upgrade some of the containers individually we forget to include them (most of the time, the person who deploys an upgrade is not the person who made the new version).
So we're looking for a way to manage them. We know that using a Dockerfile might be the best approach, but if we need to upgrade just one container, that feels like too much work for a minor change.
TL;DR: How do you manage your environment variables in Rancher? How do you document them, or how do you extract them automatically?
Thanks!
Applications in Rancher are generally managed using Stacks/Services. A Dockerfile is used to build a container image, while docker-compose/rancher-compose files are used to define the applications. Environment variables can be specified in the docker-compose file.
When you upgrade a service in Rancher, the environment variable information is carried forward, and you can also edit it before upgrading.
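For instance, here is a minimal compose sketch (the service name, image, and variable names are illustrative) that keeps the required variables next to the stack definition, so whoever performs an upgrade gets them for free instead of having to remember them:

```yaml
version: '2'
services:
  api:                                    # hypothetical NodeJS + Express service
    image: registry.example.com/api:1.4.2
    environment:
      NODE_ENV: production
      IDENTITY_SERVER_URL: https://identity.example.internal   # assumed endpoint
      MONGO_URL: mongodb://mongo:27017/app
```

If these files live in source control, the variables are documented and versioned together with the services that need them.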
Rancher's "Catalog" feature might also be useful for you. Check out: https://rancher.com/docs/rancher/v1.6/en/catalog/

What is the difference between vCenter's template and a virtual machine?

As in the topic.
I ask because I cannot find this information anywhere. Currently I am using a virtual machine (Linux) on my vCenter which is cloned, and then a special shell script is run on the freshly cloned machine to set up the environment, IP addresses, etc.
Maybe I would be able to benefit from templates this way.
I think this will be helpful
https://www.robertparten.com/virtualization/vmware-difference-between-clone-and-template/
A few differences, in my opinion:
A virtual machine is a running instance, while a template is a compact copy of a VM (with baseline and factory settings) that can be stored anywhere.
You need to deploy a template to get a running VM.
You can create a copy from both a VM and a template, but a VM you clone, whereas a template you deploy.
Moving between different setups is easy with a template.
The rest is already covered in the link provided.
But first you should search on your own and only ask if you still have doubts; that's how we all learn.
Happy Learning!
Looking at these two scenarios:
Create a template from your active VM, then deploy from the template.
Deploy from the active VM directly.
As far as I know, there will be no difference in the end result if you run these scenarios in the near future. You'll still have to run a script in order to get your IPs set up, etc.
So what's the difference?
If you mess stuff up with your active VM, change things around or whatever, you lose the ability to deploy from the (good) setup you had.
Once you make a template from your active VM, that configuration is saved as a file on the ESX host (or on the storage; I'm not 100% sure) and can be redeployed in the future.
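To make the "no difference in the end result" point concrete, here is a minimal sketch using the pyVmomi SDK (not mentioned in this thread; the host name, credentials, and object names are all illustrative). In the vSphere API, deploying from a template and cloning a live VM go through the same CloneVM_Task; the only thing that changes is whether the source object has been marked as a template:

```python
# Sketch only: assumes pyVmomi is installed and the names below exist in your inventory.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only, skips certificate checks
si = SmartConnect(host="vcenter.example.com",     # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

def find_vm(name):
    """Find a VM or template by name anywhere in the inventory."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(v for v in view.view if v.name == name)
    finally:
        view.DestroyView()

source = find_vm("linux-base-template")           # works for a template or a live VM
clone_spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(),   # empty spec keeps host/datastore; a template
                                      # may additionally need .pool / .host set
    powerOn=True,
    template=False)

# CloneVM_Task is exposed as Clone() in pyVmomi
WaitForTask(source.Clone(folder=source.parent, name="new-vm-01", spec=clone_spec), si=si)
Disconnect(si)
```

Either way you would still run your IP/setup script afterwards (or move it into a guest customization spec); the template simply freezes a known-good source so that later changes to the live VM cannot break future deployments.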

Plesk Migration without Migration Manager

I've got a problem migrating a user from one server to another.
I tried to use the Migration Manager, but when I start it the migration begins and then finishes after 2 seconds, with nothing actually migrated.
What can I do? Or should I move the data manually?
You can try to use the Plesk Mass Transfer Script.
The Plesk Mass Transfer Script (formerly the Mass Migration Script) is designed to let providers transfer accounts from one Plesk farm to another in an automated way.
The script will create a migration session for each domain only if you run mmigration.php with the '--per-domain' option. By default, a single migration session is created.
You can find more details and scenarios here: http://kb.sp.parallels.com/en/113283
Okay, I solved the problem. I had to increase the number of free domains!

How to add a new project in Trac via Bitnami hosting

I have launched a Trac demo server in the cloud using Bitnami hosting. I just want to check how to work with multiple projects in Trac. Right now I can see only one project in the demo server, and there are no options to add a new project.
The Bitnami wiki explains how to create a new project on Windows/Mac via the command line, but I can't find any info about project creation in the cloud. Can somebody help me with this?
It's not entirely clear what you mean by "projects"; that term is ambiguous. But there are two possibilities:
On the level of Apache and its Trac instances:
bitnami-trac-1.0.1\apache2\conf\httpd.conf contains an include of bitnami-trac-1.0.1\apps\trac\conf\trac.conf, and you can add another <Location> for a new Trac instance there. This allows you to run multiple Tracs within one Apache. See the Trac wiki about MultipleProject for details. Basically, you first need to create a second Trac instance (with its own database) by calling bitnami-trac-1.0.1\apps\trac\Scripts\trac-admin.exe from the Bitnami command-line shell (see the sketch at the end of this answer).
On the level of Trac and its plugins:
you may want to set up several "user projects" within one running Trac instance. Read the Trac wiki about MultipleProjects/SingleEnvironment for details. Basically, you'll need to install and set up a plugin called SimpleMultiProjectPlugin.
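For the first option, here is a rough sketch of what this looks like (all paths and the /project2 URL are illustrative; the directives below follow the mod_python example from the Trac documentation, so mirror whatever handler directives the existing <Location> in Bitnami's trac.conf actually uses). First create the second environment, e.g. bitnami-trac-1.0.1\apps\trac\Scripts\trac-admin.exe C:\path\to\project2 initenv, then add a block like this to trac.conf and restart Apache:

```apache
# Illustrative second Trac instance; copy the real handler directives from the
# existing <Location> block in bitnami-trac-1.0.1\apps\trac\conf\trac.conf.
<Location /project2>
    SetHandler mod_python
    PythonInterpreter main_interpreter
    PythonHandler trac.web.modpython_frontend
    PythonOption TracEnv "C:/path/to/project2"
    PythonOption TracUriRoot /project2
</Location>
```

After the restart, the new instance should be reachable under /project2 alongside the original one.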
You need to access your machine and use the command line. Take a look at the documentation for accessing your server: http://wiki.bitnami.com/BitNami_Cloud_Hosting/Servers/Access_your_machine

Triggering iOS build/test job via Github pull request on CloudBees

I would like Jenkins to comment on GitHub pull requests with whether a merge passes or fails (much like Travis CI). I understand this is a feature on BuildHive. However, I cannot find an option on BuildHive for using customer-provided slaves. My question is twofold:
Is there an option to limit builds to customer-provided slaves on BuildHive?
Is there a way I could enable comments on pull requests using DEV#cloud (the actual job must be run on a customer-provided slave)? If so, could you point me in the right direction to get this set up?
DEV#cloud can validate pull requests as BuildHive does, with some additional configuration. See http://wiki.cloudbees.com/bin/view/DEV/Github+Pull+Request+Validation
Answering in the order of your questions:
BuildHive uses the Validated Merge plugin for Git from Jenkins Enterprise to let Jenkins take pull requests and run the builds before pushing to the main repo. That said, you currently cannot use Customer Provided Executors with BuildHive.
DEV#cloud: normally, all Jenkins Enterprise plugins are available in the paid tier of DEV#cloud. However, this plugin is not, because it sets up a Git server within Jenkins, which is not easily achievable in a cloud setup. I have created a ticket with CloudBees support requesting that the plugin be made available, and the engineering team will investigate delivering the feature.
In the meantime, you can use Jenkins Enterprise to get this feature (though it is an on-premises solution).