Using a separate Spartacus CDC configuration for each SAP Commerce Cloud environment - spartacus-storefront

According to the documentation, the Spartacus CDC plugin can be configured as follows:
provideConfig(<CdcConfig>{
  [CDC_FEATURE]: [
    {
      baseSite: 'electronics-spa',
      javascriptUrl: 'https://cdns.<data-center>.gigya.com/JS/gigya.js?apikey=<Site-API-Key>',
      sessionExpiration: 3600,
    },
  ],
}),
However, we have the requirement to use a different javascriptUrl per SAP Commerce Cloud environment.
We tried to solve this by using multiple environment.xxx.ts files in which we define the URL. However, as far as I understand, there is no way to pass parameters to the build process in the cloud, so the different environment files cannot be selected this way. There is a related ticket about this on GitHub.
My question is now: Has anyone done this before? Is there a way to switch this URL depending on the cloud environment (dev, staging, prod)?
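One direction we considered is selecting the URL at runtime instead of at build time, so that a single build works across dev, staging, and prod. A rough, untested sketch (the hostnames and API keys below are placeholders, and window would need a guard for SSR):

const cdcJsUrlByHost: { [host: string]: string } = {
  // Placeholder hostnames and API keys, one per cloud environment
  'dev.example.com': 'https://cdns.eu1.gigya.com/JS/gigya.js?apikey=DEV_API_KEY',
  'stage.example.com': 'https://cdns.eu1.gigya.com/JS/gigya.js?apikey=STAGE_API_KEY',
  'www.example.com': 'https://cdns.eu1.gigya.com/JS/gigya.js?apikey=PROD_API_KEY',
};

provideConfig(<CdcConfig>{
  [CDC_FEATURE]: [
    {
      baseSite: 'electronics-spa',
      // Fall back to the dev URL for unknown hosts (e.g. localhost)
      javascriptUrl: cdcJsUrlByHost[window.location.hostname] ?? cdcJsUrlByHost['dev.example.com'],
      sessionExpiration: 3600,
    },
  ],
}),

Would something like this be viable, or is there a more idiomatic Spartacus way to do it?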

Related

Pushwoosh how to export presets

In Pushwoosh I have to configure over 200 presets in 3 environments (test, UAT, and Production). Is there a way to export and import presets?
It would be too time-consuming if I had to create them manually in each environment.
There are a couple of ways to clone presets programmatically.
Open API
This way is preferable if you'd like to clone presets across different accounts.
1.1. Obtain the list of presets via the listPresets API
1.2. Filter the list (if needed)
1.3. For each preset that you'd like to clone, obtain the preset information using the getPreset API
1.4. Prepare a mapping of properties as per the guide
1.5. Create new presets at the target account using the createPreset API
If your preferred scripting language is Python, you may use this library for easy access to these API methods.
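For illustration, here is a rough TypeScript sketch of that flow. The request and response field names (auth, application, preset_code, and the shape of the preset payload) are assumptions based on the usual Pushwoosh JSON envelope; verify them against the API reference before use:

const API = 'https://api.pushwoosh.com/json/1.3';

// Minimal helper for the Pushwoosh JSON envelope: POST { "request": { ... } }
async function call(method: string, request: Record<string, unknown>): Promise<any> {
  const res = await fetch(`${API}/${method}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ request }),
  });
  return (await res.json()).response;
}

async function clonePresets(auth: string, srcApp: string, dstApp: string): Promise<void> {
  // 1.1. Obtain the list of presets from the source application
  const { presets } = await call('listPresets', { auth, application: srcApp });
  // 1.2. Filter the list here if needed
  for (const { code } of presets) {
    // 1.3. Obtain full preset information
    const { preset } = await call('getPreset', { auth, preset_code: code });
    // 1.4. Remap platform-specific properties as per the guide (elided here)
    // 1.5. Create the preset at the target application
    await call('createPreset', { auth, application: dstApp, preset });
  }
}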
Internal Browser API
If you want to simply clone presets between two apps/projects of the same account, you may opt to use the JavaScript macros from this Gist.
This is how you do it:
2.1. Log in to the Pushwoosh account where you want to clone presets/events
2.2. Make sure the DESTINATION app has all the platforms configured that are selected in the presets
2.3. Make sure the Max Presets limit of your account allows for the total number of presets to be created
2.4. Open the Console of the browser where you are logged in to your Pushwoosh account
Set the HOST variable to point to your dedicated server. E.g. if your dedicated server address is subdomain.pushwoosh.com, type const HOST="subdomain"; and press Enter.
If your account is on go.pushwoosh.com, set the HOST variable to "go".
2.5. Copy-paste the contents of migrate-presets-events.js into the Console and hit Enter.
2.6. Now launch the command to migrate all presets from SRC to DST:
await migrate_presets('SRC_APP_CODE', 'DST_APP_CODE');
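Put together, the whole console session looks roughly like this (the HOST value and app codes are placeholders):

const HOST = "go"; // or your dedicated subdomain, e.g. "subdomain"
// ...paste the full contents of migrate-presets-events.js here...
await migrate_presets('SRC_APP_CODE', 'DST_APP_CODE');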
If either of these methods looks challenging, feel free to contact our support for assistance: help@pushwoosh.com

How does Gatsby hide API keys on the frontend

So, I'm struggling to understand how Gatsby works. I'm using the https://www.gatsbyjs.org/starters/AlexanderProd/gatsby-shopify-starter/ which uses a Gatsby plugin called gatsby-source-shopify. The plugin takes two params: shopName and accessToken. It looks like this in gatsby-config.js:
{
  resolve: `gatsby-source-shopify`,
  options: {
    // The domain name of your Shopify shop. This is required.
    shopName: process.env.SHOP_NAME,
    // An API access token to your Shopify shop. This is required.
    accessToken: process.env.SHOPIFY_ACCESS_TOKEN,
  },
},
Will the access token be available for people to look at when I deploy the app? Do I need to use something like serverless functions to hide my API keys, or is this fine? Any general explanation of how this works in Gatsby would be awesome.
Thanks Gatsby fam!
As the code shows, it uses process.env.SHOP_NAME, where SHOP_NAME is the name of an environment variable. Environment files are declared at the root of the project using a naming convention such as .env.domain1.com. In such a file you can store any variable you want to use in your Gatsby configuration. When dealing with sensitive values (API keys, tokens, passwords, etc.) it's recommended to use this approach and to ignore all .env files in your .gitignore.
When you trigger a command in Gatsby, you can pass it some variables, for example:
"develop": "GATSBY_ACTIVE_ENV=domain1.com gatsby develop"
In this case, the GATSBY_ACTIVE_ENV variable will have domain1.com as its value. Then, in your gatsby-config.js, you can load the matching environment file (above module.exports):
require("dotenv").config({
path: `.env.${process.env.NODE_ENV}`,
})
Then, you can create an environment file such as .env.domain1.com in your project root and store any desired variables:
SHOP_NAME=12345
Taking into account the code you've provided, if you run the develop command (with everything above in place), SHOP_NAME will resolve to 12345.
So, answering your question: those tokens won't be exposed. The source plugin only reads them at build time, so they never reach the browser. You need to store them on your local machine and on your deploy server, not in your repository.
From Gatsby docs:
Please note that you shouldn't commit .env.* files to your source control and rather use options given by your Continuous Deployment (CD) provider...
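One nuance worth adding: Gatsby only inlines environment variables prefixed with GATSBY_ into client-side code; everything else is available at build time only. A small illustration (GATSBY_API_URL is a made-up example name):

// Build-time only (gatsby-config.js / gatsby-node.js): never shipped to the browser
const accessToken = process.env.SHOPIFY_ACCESS_TOKEN;

// GATSBY_-prefixed variables ARE embedded in the client bundle at build time,
// so never use this prefix for secrets
const apiUrl = process.env.GATSBY_API_URL;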
Edit: Thanks to @Hans Martin Henken for providing the following article about Gatsby security

How to securely set up continuous delivery?

Setup:
Private master repo and every developer has their own private fork.
Currently using CircleCI, but we'd be happy to switch to satisfy requirements
Branches on master repo are protected with merge restrictions
Requirements:
Build + test on forked pull requests
Deploy to different environments based on master repo branch updates
Not all developers can be fully trusted with production credentials
Partial Solution:
Enable building and passing secrets on forked pull requests (Reference)
Use CircleCI contexts to set environment variables per branch. This allows different deploy targets.
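For reference, the context-per-branch idea looks roughly like this in a CircleCI 2.1 config (the job name, context name, and branch are placeholders):

workflows:
  deploy:
    jobs:
      - deploy-prod:
          # The "production" context holds the production credentials;
          # the branch filter keeps it off feature branches and forked PRs
          context: production
          filters:
            branches:
              only: master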
Problems:
All repo-specific secrets as well as all global contexts are now accessible to anyone who can open a PR.
Even if we disable building on forked pull requests, anyone with write access to at least one repo can access all global contexts.
Question:
This would seem to be a very common use case. How do other companies solve it?
Is CircleCI not the right tool for this? - No, it is not (see below).
Should we build a custom solution?
Edit1:
CircleCI got back to me and, surprisingly, this is not a use case they support. Looking into other providers now. The above questions are still unanswered.
Edit2:
I've also contacted Travis CI and Semaphore CI, and it appears that only Travis CI supports building forked PRs without leaking secrets into them (Reference).
Semaphore CI is missing (1) building forked PRs and (2) hiding secrets from the deployment phase in non-master workflows.
CircleCI has restricted contexts, but they would require manually changing workflows. Definitely not easy to set up, and I don't fully understand how they would work.

Triggering iOS build/test job via GitHub pull request on CloudBees

I would like Jenkins to comment on whether a merge passes or fails (much like Travis CI) on GitHub pull requests. I understand this is a feature on BuildHive. However, I cannot find an option on BuildHive for using customer-provided slaves. My question is twofold:
Is there an option to limit builds to customer-provided slaves on BuildHive?
Is there a way I could enable comments on pull requests using DEV@cloud (the actual job must be run on a customer-provided slave)? If so, could you point me in the right direction to get this set up?
DEV@cloud can validate pull requests as BuildHive does, with some additional configuration. See http://wiki.cloudbees.com/bin/view/DEV/Github+Pull+Request+Validation
Answering in the order of your questions:
BuildHive uses the Validated Merge plugin for Git from Jenkins Enterprise to enable Jenkins to perform pull requests and run the builds before doing a push to the main repo. That said, you currently cannot use customer-provided executors with BuildHive.
DEV@cloud: Normally, all Jenkins Enterprise plugins are available in a paid tier of DEV@cloud. However, this plugin is not, as it sets up a git server within Jenkins, which is not easily achievable in a cloud setup. I have created a ticket with CloudBees support requesting that the plugin be made available, and the engineering team will investigate delivering the feature.
Meanwhile, if you like, you can use Jenkins Enterprise to get this feature (however, it is an on-premises solution).

Need advice regarding deployment on multiple remote machines

Currently I am using MSDeploy to build and deploy to several machines using TeamCity. In my current scenario, I need to build, package, and deploy to Dev. After this I need to deploy the same package to the Test and Live servers (which are on different domains). The problem is that web.config transformation only occurs when a package is built, so the package created for Dev cannot be reused: only the Dev web.config transformation has been applied. I also know that we can change the web.config when unpackaging, but those parameters are very limited, and we have a lot of changes, not just the connection string or db changes.
Another solution is to add extra steps to build packages for Test and Live as part of the Dev deployment, but that means a lot of copying to remote servers, once for Test and once for Live, which is very time-consuming due to the different domains.
Can you please advise on the best solution for this scenario, so that I can use TeamCity to publish to Dev, Test, and Live using the same package with different web.configs in one go?
To configure items at deployment time that are not automatically created for you, you can add a file named parameters.xml to your project and extend what you want to make available at deployment time.
Here's some documentation on the approach: Using Deployment Parameters for Web.Config File Settings.
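For illustration, a minimal parameters.xml sketch (the setting name, XPath match, and default value are hypothetical):

<parameters>
  <!-- Exposes an appSetting so each environment can supply its own value at deploy time -->
  <parameter name="MyAppSetting" defaultValue="dev-value">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/appSettings/add[@key='MyAppSetting']/@value" />
  </parameter>
</parameters>

At deploy time each environment then supplies its own values, for example via msdeploy's -setParam arguments or a per-environment SetParameters.xml file, so a single package can serve Dev, Test, and Live.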