I have files which I want to encrypt and store in RTC source control. Do the files need to be encrypted using an external program, or does RTC source control support encrypting files within the stream?
This thread summarizes the current situation regarding file encryption in RTC:
You would need to raise an enhancement request if you wanted this handled by RTC.
There are some other options that might help.
First, the underlying database should have its own security mechanisms to encrypt data; it is worth looking into those.
Next, you could also encrypt the disk the data is stored on (for example, I have worked with a customer that used BitLocker, part of Windows Server, to encrypt the disk that RTC and the database run on).
It is also worth asking more questions about the requirement, for example: who are you trying to conceal the source code from?
So using an external program for file encryption would still be required with the current versions of RTC (4.x).
I'm working on implementing a custom language server and a VSCode language extension. My starting point for the client side is lsp-sample. My server implementation is entirely from scratch, in a different language (not JS).
Currently, I've successfully set up textDocument/didOpen and textDocument/didChange messages to be sent by the client and received by the server. However, I'm having trouble figuring out how to synchronize all files in the VSCode workspace, not just those that the user has opened. I can't find where this is supported in the protocol. The only text document synchronization capabilities I see are for documents opening, closing, and edits. What about all the other documents in the workspace?
For example, in order to handle "goto definition" requests, the server needs to know about definitions in other files, perhaps those that have never been opened or edited by the user.
A hacky solution would be, on the server side, to parse the URI of the workspace and just go load a bunch of files manually. But this seems like something that the LSP should support; perhaps I'm just missing where it's documented. (Also, it feels like I would be violating the spirit of the LSP design to do some covert ops like this behind the scenes, without communicating with the client.)
Perhaps you're looking for the DidChangeWatchedFiles notification. See the specification:
The watched files notification is sent from the client to the server when the client detects changes to files and folders watched by the language client (note that although the name suggests that only file events are sent, it is about file system events, which include folders as well). It is recommended that servers register for these file system events using the registration mechanism. In former implementations, clients pushed file events without the server actively asking for it.
Basically, the client is responsible for watching the files you require, and it sends a notification to the server each time one of them changes. At that point the server can load them itself.
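To make that concrete, here is a minimal sketch of the server side of the exchange, written against the vscode-languageserver Node package. Your server is in another language, but it would send and receive the equivalent JSON-RPC messages; the glob pattern and file extension are placeholders:

// server.ts, a sketch using the vscode-languageserver package
import {
  createConnection,
  DidChangeWatchedFilesNotification,
  InitializeResult,
  ProposedFeatures,
  TextDocumentSyncKind,
} from 'vscode-languageserver/node';

const connection = createConnection(ProposedFeatures.all);

connection.onInitialize((): InitializeResult => ({
  capabilities: { textDocumentSync: TextDocumentSyncKind.Incremental },
}));

connection.onInitialized(() => {
  // Dynamically register for file system events covering the whole
  // workspace, not just the documents the user has opened.
  connection.client.register(DidChangeWatchedFilesNotification.type, {
    watchers: [{ globPattern: '**/*.mylang' }], // placeholder extension
  });
});

connection.onDidChangeWatchedFiles((params) => {
  for (const change of params.changes) {
    // change.uri names the file; change.type is 1 (created), 2 (changed)
    // or 3 (deleted). The notification carries no file content, so the
    // server is expected to read the file from disk itself.
    connection.console.log(`watched file event: ${change.uri} (${change.type})`);
  }
});

connection.listen();

Alternatively, lsp-sample shows the client-driven variant: the extension passes synchronize: { fileEvents: workspace.createFileSystemWatcher('**/*.mylang') } in its LanguageClientOptions, and the client then pushes the same workspace/didChangeWatchedFiles notifications without the server registering for them.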
I have a program that looks for a config.json file, from which it reads the sensitive information it needs, such as DB creds, creds for various APIs, etc. I don't upload the config file to the git repository because I understand that's a bad approach, even though it's a private repository. Now I'm starting to fear that I might accidentally delete this file, or permanently lose it due to a failure on my machine. My question is: what is the best approach for keeping a constant, secure backup of this file, considering that it may contain very sensitive information?
I would also like to mention that this config file changes frequently (and may grow in size...).
Select, implement, and test a backup system that meets your requirements for securing sensitive data. Access controls on the backup system, encryption of the backup media, and logging of the jobs that run are fundamental features for managing that data.
Storing secrets in version control like git is tempting. But beware: a git repo may be cloned to many places, and every copy contains your credentials forever; deleting them permanently requires rewriting history. It is often easier to rotate any creds that were committed, leave the old ones in history, and stop committing secrets in the future.
Think about how you want to manage secrets. Secrets management software exists that wraps creds and keys in strong authentication and encryption. Building the application server could then involve installing the application and retrieving the API creds via the secrets server. It may suit your needs to have separate systems for storing automation scripts, secrets, and backups.
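If you end up scripting the backup yourself, the essential step is to encrypt the file before it leaves your machine. Here is a minimal Node/TypeScript sketch, assuming a passphrase supplied via a hypothetical BACKUP_PASSPHRASE environment variable (an audited tool such as gpg is the safer default):

// backup-config.ts, a sketch using Node's built-in crypto module
import { createCipheriv, randomBytes, scryptSync } from 'crypto';
import { readFileSync, writeFileSync } from 'fs';

const passphrase = process.env.BACKUP_PASSPHRASE;
if (!passphrase) throw new Error('BACKUP_PASSPHRASE is not set');

const salt = randomBytes(16);
const key = scryptSync(passphrase, salt, 32); // derive a 256-bit key
const iv = randomBytes(12);                   // AES-GCM nonce

const cipher = createCipheriv('aes-256-gcm', key, iv);
const plaintext = readFileSync('config.json');
const encrypted = Buffer.concat([cipher.update(plaintext), cipher.final()]);

// Salt, IV and auth tag are needed for decryption but are not secret,
// so they can be stored next to the ciphertext.
writeFileSync(
  'config.json.enc',
  Buffer.concat([salt, iv, cipher.getAuthTag(), encrypted]),
);

The resulting config.json.enc can then be copied to whatever backup target you trust; only the passphrase has to be kept somewhere safe (for example, in a password manager).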
I've created an SAP program and I want to deploy it in another SAP system.
I know I can import the Transport Request files with the created program into the new system, but I'm looking for other options.
Is it possible to "install"/import my program into another SAP system?
I can only assume you don't want to use the transport system because the systems are not part of the same landscape? If so, you can still use the transport system; you just need to move the required files around manually.
But there is another approach you can follow: using SAPlink. It's an open source program that allows you to download ABAP source, dictionary objects, etc. from one system into files and then upload them into another system. Of course, both systems will need to have SAPlink installed for this to work.
This is somewhat by design: SAP is the largest off-the-shelf (OTS) system available, and there have to be controls to ensure that people cannot install software unless they are specifically given the authorization to do so.
Even using SAPlink (which mjturner suggests) requires the ability to install that software first, and I doubt you will find it in very many productive landscapes, so that likely won't be an option.
Assuming you have developer authorization, you can always download the source code from your development SAP system and then upload it from within the ABAP editor (SE38) using Utilities -> More Utilities -> Upload/Download. Note that this doesn't work in the class editor, so cut and paste is another option there.
There are three ways to move transports from one system to another.
1. Moving the transport files from the “data” and “cofiles” directories manually.
When a transport is released in the source system, SAP automatically puts the transport files into the transport directory on the file system. These files can easily be copied to the second system and imported there via transaction STMS.
2. Using CAR files
CAR files are packed archives, like a zip file. They contain the data and cofiles.
car.exe -cvf packedFile.car data\R900000.XXX cofiles\K900000.XXX
(car.exe is an SAP standard tool; XXX is the system ID)
These CAR files can be imported via transaction SAINT, which allows importing files from the frontend into the data and cofiles directories without direct access to the file system. After the file has been imported via SAINT, the transport can be imported using STMS. This is the common way to ship software to systems outside the current landscape.
3. Using SAR and PAT files
These files are more specialized: they allow software to be installed as an add-on in SAP, which is required if the program is to be certified by SAP. They have to be created using the AAK (SAP Add-On Assembly Kit). Unfortunately, I have not created these files myself yet, but it seems to be quite complex to get this working, because there are some checks that have to be passed. The files can be imported via transactions SPAM (upload) and SAINT (import).
So, a common practice these days is to supply connection strings and passwords as environment variables so that they are never placed in a file; a rough sketch of the application side appears after the questions below. This is all fine and dandy, but I'm not sure how to make this work when setting up a continuous deployment workflow with a configuration management tool such as Salt/Ansible or Chef/Puppet.
Specifically, I have the following questions in environments using the above mentioned configuration management tools:
Where do you store connection strings/passwords/keys separate from codebases?
Do you keep those items in a code-repo of some type (git, etc.)?
Do you use some structure built-in to your tool?
How do you keep those same items secure?
Do you track changes/back-up these items, and if so, how?
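To make it concrete, the application side of the env-var pattern is roughly the following (a Node/TypeScript sketch with made-up variable names):

// config.ts, reading creds from the environment instead of a file
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`missing required environment variable: ${name}`);
  }
  return value;
}

export const config = {
  dbConnectionString: requireEnv('DB_CONNECTION_STRING'),
  thirdPartyApiKey: requireEnv('THIRD_PARTY_API_KEY'),
};

The question is how the deployment tooling gets those values onto the box in the first place, and where they live before that.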
In Chef you can:
- store passwords or API tokens in encrypted data bags or using chef-vault; they are then decrypted while Chef does the provisioning (encrypted data bags use a shared secret, chef-vault uses the existing PKI of the Chef client);
- set environment variables when calling external software, using the environment parameter of e.g. the execute resource;
- as for managing and tracking them: I'm not sure what to write here; I'd say you don't really manage them. This way you set the variables only for the command that needs them, not e.g. for the whole Chef run.
With Puppet, the preferred way is probably to store the secrets in Hiera files, which are just plain YAML files. That means all secrets are stored on the master, separate from the manifest files.
TrueCrypt virtual encrypted disks are cross-platform and independent of tooling. Mount the disk read-write to change the secrets in the files it contains, unmount it, and then commit/push the encrypted disk image into version control. Mount it read-only for automation.
ansible-vault can be used to encrypt sensitive data files. However, a CI server like Jenkins is not the safest place to store access credentials. If you add HashiCorp Vault and Ansible Tower/AWX, you can provide a secure solution for several teams.
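For example, once a secret lives in HashiCorp Vault, an application or deploy step can pull it over Vault's HTTP API instead of reading it from a file. A rough Node/TypeScript sketch against the KV version 2 engine, with placeholder paths and the conventional VAULT_ADDR/VAULT_TOKEN variables:

// fetch-secret.ts, reading a credential from Vault's KV v2 HTTP API
async function readSecret(path: string): Promise<Record<string, string>> {
  const addr = process.env.VAULT_ADDR ?? 'http://127.0.0.1:8200';
  const token = process.env.VAULT_TOKEN;
  if (!token) throw new Error('VAULT_TOKEN is not set');

  const res = await fetch(`${addr}/v1/secret/data/${path}`, {
    headers: { 'X-Vault-Token': token },
  });
  if (!res.ok) throw new Error(`Vault returned HTTP ${res.status}`);

  const body = await res.json();
  return body.data.data; // KV v2 nests the key/value pairs one level deeper
}

// Usage: the credentials never touch the repository or a plain-text file.
readSecret('myapp/db').then(({ username, password }) => {
  // open the database connection with username/password here
});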
We have SSIS package config files that contain DB encryption passwords or PGP encryption passwords. I have come to the conclusion that there is no "silver bullet" solution for encrypting SSIS package config files like there is for web.config files, etc.
Should we consider not using config files at all for SSIS packages, and if so, what other options do we have for storing settings?
Encryption of configuration files is not handled by SSIS itself. You can use NTFS encryption and/or ACLs to control access to the config files and their contents. That beats learning and administering a new access/encryption mechanism, and it ties in nicely with your AD efforts.
Another option is to store the configurations in a SQL table and use SQL security to control access, but most administrators seem to prefer file-based management.
Could you use a table for config storage and lock down access to it? Put that database (and its logs and backups) in an EFS-protected folder, and the only people who could access it would be SQL sysadmins or other authorized accounts, plus whoever has access to decrypt EFS with a recovery account and restore the database (domain admins?).
You could also use SQL 2005's native encryption and write your own procedure to access the data and then set the connection properties in a script task. I haven't done this, but theoretically it might work.
While storing configuration information in a database is a viable alternative, if you are stuck with XML configuration files (for a variety of reasons), you may try BI xPress Secure Configuration Manager or SSISCipherBoy (freeware; I am affiliated with this project). These two utilities answer your question precisely.