Can't upload template to OpenShift Origin

When I try to upload a template with the command:
oc create -f postgres.json
the CLI returns:
error: bufio.Scanner: token too long

This is unfortunately a known issue: https://github.com/kubernetes/kubernetes/issues/12582.
You can work around it by making sure no line in your template file is longer than 64 KB.
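One practical way to get every line of a JSON template under that limit is to re-pretty-print it (a sketch, assuming jq is installed; the output filename is arbitrary):
$ jq . postgres.json > postgres-pretty.json   # re-indents, one key per line
$ oc create -f postgres-pretty.json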


BigQuery authentication with .profile or .bashrc file

I am trying to set an environment variable (on macOS) so I can make requests to Google BigQuery, following this guide:
Source: https://cloud.google.com/bigquery/docs/reference/libraries
I'm doing this to avoid having to type export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json" into the VS Code terminal every time I want to run something against BigQuery.
How I do it (which doesn't work):
1. Create both a .bashrc and a .profile file (I am not sure which one I need).
2. Insert the following line into both files:
export GOOGLE_APPLICATION_CREDENTIALS="/Users/GunardiLin/Credentials/service-account-file.json"
3. Put the Google credentials at: /Users/GunardiLin/Credentials/service-account-file.json
(The credential file is 100% correct: if I run export GOOGLE_APPLICATION_CREDENTIALS=/Users/GunardiLin/Credentials/service-account-file.json manually, my BigQuery requests work fine. Doing that manually every time I start VS Code is not ideal for me.)
After Step 3, I get the following error:
Can somebody help me with this problem?
I am using macOS, VS Code, and Conda. Thank you in advance.
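One thing worth checking (an assumption, since the question doesn't say which shell is in use): recent macOS versions default to zsh, which reads ~/.zshrc rather than ~/.bashrc, so an export placed only in .bashrc may never be sourced by the VS Code terminal. A minimal sketch:
$ echo 'export GOOGLE_APPLICATION_CREDENTIALS="/Users/GunardiLin/Credentials/service-account-file.json"' >> ~/.zshrc
$ source ~/.zshrc
$ echo "$GOOGLE_APPLICATION_CREDENTIALS"   # should print the path in any new terminal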

Gitlab-CI: AWS S3 deploy is failing

I am trying to create a deployment pipeline for GitLab CI on a React project. The build works fine, and I use artifacts to store the dist folder from my yarn build command. That part works as well.
The issue is with my deployment command: aws s3 sync dist/'bucket-name'.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket, however I do not know why I get an error on the deployment job.
When I run aws s3 sync dist/'bucket-name' locally, everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean that one or more files marked for transfer were skipped during the transfer process; however, all other files marked for transfer were successfully transferred. Files that are skipped include: files that do not exist, files that are character special devices, block special devices, FIFOs, or sockets, and files that the user cannot read.
The second paragraph might explain what's happening.
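As a small illustration of that second case (the bucket name and file below are hypothetical), a single file the CLI cannot read is skipped and the sync exits with code 2, even though everything else uploads:
$ chmod 000 dist/unreadable.svg       # make one file unreadable
$ aws s3 sync dist/ s3://my-bucket    # warns that the file was skipped
$ echo $?                             # prints 2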
There is no built-in yarn build command; build is a script defined in the project's package.json that yarn runs. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer was the problem. The solution was removing special characters from a couple of SVG filenames. I suspect uploading the dist folder as a (zipped) artifact may have changed some of the file names, which confused S3. Removing ® and + from the filenames resolved the issue.
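For anyone needing the same cleanup, a rough sketch that strips those two characters from filenames in dist/ (extend the bracket set as needed):
$ for f in dist/*; do g="$(printf '%s' "$f" | sed 's/[®+]//g')"; [ "$f" != "$g" ] && mv "$f" "$g"; done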

Installing Google Adwords Api Library (using docker)

Google's documentation on installing the library, found here: https://github.com/googleads/googleads-php-lib/blob/master/README.md#getting-started, instructs us to copy adsapi_php.ini, as constructed here: https://github.com/googleads/googleads-php-lib/blob/master/examples/AdWords/adsapi_php.ini, to your home directory.
I filled out the necessary variables in the .ini. Since I am using Docker, I placed this file inside my container at /var/www/home/node/. When I run composer require googleads/googleads-php-lib, I get the following error at the command prompt:
Your requirements could not be resolved to an installable set of packages.
Problem 1
- Installation request for googleads/googleads-php-lib ^37.1 -> satisfiable by googleads/googleads-php-lib[37.1.0].
- googleads/googleads-php-lib 37.1.0 requires ext-soap * -> the requested PHP extension soap is missing from your system.
To enable extensions, verify that they are enabled in your .ini files:
- /usr/local/etc/php/php.ini
- /usr/local/etc/php/conf.d/adsapi_php.ini
- /usr/local/etc/php/conf.d/docker-php-ext-pdo_pgsql.ini
- /usr/local/etc/php/conf.d/docker-php-ext-sodium.ini
- /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
You can also run `php --ini` inside terminal to see which files are used by PHP in CLI mode.
Installation failed, reverting ./composer.json to its original content.
I assumed my adsapi_php.ini was simply in the wrong location, since it contains what I believe is needed to avoid the above issue, but I have tried placing it in several different places and I always get the same error.
Any help would be appreciated!
Just try to edit php.ini inside the Docker container (docker exec -it {container} bash) and enable the soap extension there.
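If the container is based on one of the official php images (an assumption; the container name below is a placeholder), the bundled helper script is the usual way to add the extension, and Debian-based images need the libxml2 headers first:
$ docker exec -it my-php-container apt-get update
$ docker exec -it my-php-container apt-get install -y libxml2-dev
$ docker exec -it my-php-container docker-php-ext-install soap
$ docker exec -it my-php-container php -m | grep -i soap   # confirms the extension loads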

Amazon libs3 (environment variables)

Missing environment variable: S3_ACCESS_KEY_ID
is the error I am getting even after assigning it. I used the aws configure command, in which I entered the credentials, but when listing I still get this error. What should I do?
COMMAND LINE::
$ export S3_ACCESS_KEY_ID=************
$ s3 list
Missing environment variable: S3_SECRET_ACCESS_KEY
The immediate problem is that the environment variable is wrong.
You set:
export AWS_ACCESS_KEY_ID=
but it is looking for S3_ACCESS_KEY_ID:
$ s3 list
Missing environment variable: S3_SECRET_ACCESS_KEY
What is possibly more interesting, however, is that you did use aws configure in the first place, although this is only shown in the images in the original post, not in recent edits. We would expect aws configure to set up credentials correctly, and we would also expect the variables to be named AWS_*, not S3_*. So why is s3 list looking for S3_*?
I can't find any reference to s3 list. Are you sure this is the correct command? Do you actually want to use something like aws s3 ls?
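One possibility (an assumption based on the question's libs3 title): the s3 tool shipped with libs3 reads its own S3_* variables rather than the AWS_* ones, so both the key ID and the secret key would have to be exported before it will run (values below are placeholders):
$ export S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ s3 list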
If you are new to AWS, read the AWS CLI getting started documentation.
The recommended way to configure the AWS CLI is aws configure, which sets up your credentials and environment. If you insist on setting environment variables manually, you need to make three exports (the keys shown are examples from the AWS CLI documentation):
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2
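Either way, a quick check that the AWS CLI can actually see working credentials:
$ aws sts get-caller-identity   # prints account and ARN when credentials resolve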

How to get Suricata to work in IPS mode in Ubuntu?

I am trying to install Suricata in VMware Player, and when I run
suricata -c /etc/suricata/suricata.yaml
I get the error:
- [ERRORCODE: SC_ERR_CONF_YAML_ERROR(240)] - Failed to parse configuration line 382: did not find expected key
Any help appreciated.
This is usually a problem with the formatting of the YAML file. YAML uses spaces for indentation, and this error typically appears when tabs have been used instead.
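Two quick checks for that (the line number 382 comes straight from the error message above):
$ grep -nP '\t' /etc/suricata/suricata.yaml | head   # list any lines containing tabs
$ sed -n '378,386p' /etc/suricata/suricata.yaml      # eyeball the lines around 382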