How can I copy a Storyblok space to another account?

I'm facing the issue of copying an entire space in Storyblok from one account to another.
The only way I can find is:
storyblok push-components <SOURCE> --space <SPACE_ID> --presets-source <PRESETS_SOURCE>
But this method doesn't include assets, internationalization, or schemas.
I also found a way to export as CSV, but it doesn't include internationalization.
Is there any way to copy an entire space and move it to another account?

You can use this command to sync components, folders, roles, datasources, or stories between spaces, but not between accounts.
$ storyblok sync --type <COMMAND> --source <SPACE_ID> --target <SPACE_ID>
type: describes the command type to execute. Can be folders, components, stories, datasources, or roles. It's possible to pass multiple types separated by commas (,).
If you want to move everything in the account, contact their support team.
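As a rough sketch of that sync, assuming the Storyblok CLI is installed globally and 12345 / 67890 are placeholder space IDs:
$ npm install -g storyblok
$ storyblok login
# copy structure and content from the source space (12345) into the target space (67890)
$ storyblok sync --type folders,components,roles,datasources,stories --source 12345 --target 67890
Note that this still doesn't cover assets, so moving a space to a different account still goes through Storyblok support.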

Related

How can I search all files which are NOT in .gitignore?

I have to do a major search-and-replace update across multiple repos...
In my initial search, I get back over 30,000 results. Many of these results are inconsequential and shouldn't be touched because they are 3rd-party tools. Most of these tools are ignored in the repos by .gitignore.
So, I tried creating a new Search Scope that would include only non-.gitignored files, only to find that I must manually select all folders...
...or must I?
Is there a way to create a scope or otherwise search only non-.gitignored files?
NOTE: Do not confuse my request with PhpStorm/Intellij's "ignore" feature. I am specifically asking about .gitignore.
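If an IDE-level scope turns out not to be possible, one fallback outside PhpStorm (not the IDE scope feature asked about, just a sketch of an alternative, with oldFunctionName as a placeholder search term) is to let Git do the filtering, since git grep and git ls-files only look at tracked files and therefore skip anything matched by .gitignore:
# search only the files Git tracks (ignored files are never tracked)
$ git grep -n "oldFunctionName"
# or list the candidate files and hand them to another tool
$ git ls-files -z | xargs -0 grep -l "oldFunctionName"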

Is there a way to clone data from CrateDB into Crate running on a new container?

I currently have one container which runs Crate, and stores all its data in the /data/ directory. I am trying to create a clone of this container for debugging purposes -- ideally, the clone would be running Crate (which I can query) using the exact same data. I've tried mounting the same data directory into the /data/ directory of the cloned container and starting Crate, but when I run any queries, I notice that Crate shows 0 tables (that is, it doesn't recognize the data in the folder as database tables). How do I get around this? I know I can export and import data using COPY TO and COPY FROM, but I have so many tables that that would be quite cumbersome to write.
I'm wondering a little why you want to use the same data directory for debugging purposes, since you would then modify data that you probably don't want to change. Also, the two instances would overwrite each other's data when using the same data directory at the same time. That's the reason why this is not working.
What you can still do is simply copy the folder in your file system and mount the second (debugging) node onto the cloned folder.
Another solution would be to create a cluster containing both nodes as documented here: https://crate.io/docs/crate/guide/best_practices/docker.html.
Hope that helps.
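A minimal sketch of that copy-and-mount approach, assuming the data lives under /var/crate/data on the host and the container names are placeholders:
# stop (or at least quiesce) the original container before copying its data
$ docker stop crate-original
# clone the data directory on the host
$ cp -a /var/crate/data /var/crate/data-debug
# start a second Crate container that mounts the cloned copy at /data
$ docker run -d --name crate-debug -p 4201:4200 -v /var/crate/data-debug:/data crate
$ docker start crate-original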

How to add more data to be stored in the Jenkins REST API

To make the question simple, I know that I can get some build information with https://jenkins_server/...///api/json|xml|python, and I get a lot of information for that build record.
However, I want to add more information to that build record. For example, the Docker image created, or the tickets and files changed since the last build (to create release notes), etc. How do I do that?
For now, I use a script to create a JSON file as an artifact and read that JSON file to get this information, but it seems redundant if I could add the data to the Jenkins build object directly.
The Jenkins remote access API is designed to provide access to generic Jenkins-internal information, like build numbers, timestamps, fingerprints etc.
If you want to add your own data there, then you must extend Jenkins accordingly, e.g., by writing a plugin that advertises your (custom) information items as standard Jenkins-"internal" data. If you want to do that, you may want to have a look at the way fingerprint information is handled (I found that quite instructive).
However, I'd recommend that you stick with your current approach, and keep generic Jenkins-internal information separated from Job-specific data. It is less effort and clearly separates your own data from Jenkins' data.
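A hedged sketch of that separation (server URL, job name, build number, and artifact file name are placeholders): the standard remote API serves the generic build data, while the JSON artifact archived by the build serves your custom data right next to it:
# generic, Jenkins-internal build information
$ curl -s "https://jenkins_server/job/my-job/42/api/json"
# your own data, archived as an artifact by the build itself
$ curl -s "https://jenkins_server/job/my-job/42/artifact/build-info.json"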

FlywayDB ignore sub-folder in migration

I have a situation where I would like to ignore specific folders inside of where Flyway is looking for the migration files.
Example:
/db/Migration
    2.0-newBase.sql
    /oldScripts
        1.1-base.sql
        1.2-foo.sql
I want to ignore everything inside of the 'oldScripts' sub folder. Is there a flag that I can set in Flyway configs like ignoreFolder=SOME_FOLDER or scanRecursive=false?
An example of why I would do this: say I have 1000 scripts in my migration folder. If we onboard a new member, instead of having them run the migration on 1000 files, they could just run the one script (the new base) and proceed from there. The alternative would be to never sync those files in the first place, but then people would need to remember to check source control for prior migrations instead of just looking on their local drive.
This is not currently supported directly. You could put both directories at the same level in the hierarchy (without nesting them) and selectively configure flyway.locations to achieve the same thing.
Since Flyway 6.4.0, wildcards are supported in flyway.locations. Examples:
db/**/test
db/release1.*
db/release1.?
More info at https://flywaydb.org/blog/organising-your-migrations
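As a rough sketch of the sibling-folder approach above (paths are placeholders, and this assumes oldScripts has been moved next to, not inside, the migration folder), point flyway.locations only at the folder you want scanned:
# only scan db/Migration; db/oldScripts is never picked up
$ flyway -locations=filesystem:db/Migration migrate
The same value can also live in flyway.conf as flyway.locations=filesystem:db/Migration.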

Dealing with sensitive password data in Perforce

What is the best practice for dealing with files containing sensitive password data in Perforce?
In my team's projects, we are trying to eliminate such files from the repository.
Are there any Perforce specific features/conventions that could help?
There are some XML files which contain code (not just passwords) that need to be under revision control. How are such files handled?
The following posts specific to git seem to be a good starting point:
Remove sensitive data
Best practices for github
You can use p4 obliterate to remove the offending versions of files containing passwords. You could use .example files containing example/fake passwords. Use your favorite scripting language to replace the passwords locally and create the real files. You could add p4 protect permissions to disallow particular files from being checked in.
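A minimal sketch of the obliterate-plus-template approach (the depot path, file names, and the __PASSWORD__ token are all hypothetical):
# preview first, then permanently remove the revisions that contain real passwords
$ p4 obliterate //depot/project/conf/credentials.xml
$ p4 obliterate -y //depot/project/conf/credentials.xml
# keep a checked-in template with fake values and generate the real file locally
$ sed "s/__PASSWORD__/$REAL_PASSWORD/" credentials.xml.example > credentials.xml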