Error from server: User "system" cannot create imagestreams in project "openshift" - openshift-origin

I'm quite new to openshift.org. I'm trying to build a cluster with 3 masters (each also running etcd), 1 load balancer and 2 nodes.
I'm building it with Ansible as described in https://docs.openshift.org/latest/install_config/install/advanced_install.html#multiple-masters
Ansible works fine until:
TASK: [openshift_examples | Import Centos Image streams]
failed: [...] => {"changed": false, "cmd": ["oc", "create", "-n", "openshift", "-f", "/usr/share/openshift/examples/image-streams/image-streams-centos7.json"], "delta": "0:00:00.290493", "end": "2016-01-25 18:30:04.688765", "failed": true, "failed_when_result": true, "rc": 1, "start": "2016-01-25 18:30:04.398272", "stdout_lines": [], "warnings": []}
stderr: Error from server: User "system" cannot create imagestreams in project "openshift"
[...]
etcd looks OK; it reports all 3 members as healthy:
cluster is healthy
member 2025245ceaafe339 is healthy
member b2e385dc8675fe92 is healthy
member fd304b55f10870a is healthy
When I try to get the nodes, I get an empty list, which may look bad...
oc get nodes
If I try it after logging in, I get the following:
oc get nodes
Error from server: User "system" cannot list all nodes in the cluster
Is this a known issue? Where do you suggest I look to find out what is failing?

It seems that you are not logged into OpenShift as system:admin.
To log in as system:admin from the OpenShift machine:
oc config view
oc login -u system:admin
To check whether you are logged in as system:admin, you can run oc whoami.
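For completeness, a minimal sketch of that check on one of the masters, assuming the default paths of an Origin advanced install (the admin kubeconfig location is an assumption):
# on a master; the admin kubeconfig path is the default for an advanced install
export KUBECONFIG=/etc/origin/master/admin.kubeconfig
oc whoami        # should print system:admin
oc get nodes     # should now list the nodes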

Related

An error occurred (403) when calling the HeadObject operation: Forbidden in airflow (2.0.0)+

Error -
*** Failed to verify remote log exists s3://airflow_test/airflow-logs/demo/task1/2022-05-13T18:20:45.561269+00:00/1.log.
An error occurred (403) when calling the HeadObject operation: Forbidden
Dockerfile -
FROM apache/airflow:2.2.3
COPY /airflow/requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt
RUN pip install apache-airflow[crypto,postgres,ssh,s3,log]
USER root
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git
USER airflow
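A quick way to confirm that the S3 log handler is importable in the image built from this Dockerfile (the tag my-airflow:latest is a placeholder; the apache/airflow base image normally already includes the Amazon provider):
docker run --rm my-airflow:latest \
  python -c "from airflow.providers.amazon.aws.log.s3_task_handler import S3TaskHandler; print('amazon provider OK')"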
Under connection UI -
Connection Id * - aws_s3_log_storage
Connection Type * - S3
Host - <My company's internal link>. (ex - https://abcd.company.com)
Extra - {"aws_access_key_id": "key", "aws_secret_access_key": "key", "region_name": "us-east-1"}
Under values.yaml -
config:
  logging:
    remote_logging: 'True'
    remote_base_log_folder: 's3://airflow_test/airflow-logs'
    remote_log_conn_id: 'aws_s3_log_storage'
    logging_level: 'INFO'
    fab_logging_level: 'WARN'
    encrypt_s3_logs: 'False'
    host: '<My company's internal link>. (ex - https://abcd.company.com)'
    colored_console_log: 'False'
How did I create the bucket?
Installed awscli
Used the commands -
1. aws configure
AWS Access Key ID: <access key>
AWS Secret Access Key: <secret key>
Default region name: us-east-1
Default output format:
2. aws s3 mb s3://airflow_test --endpoint-url=<My company's internal link>. (ex - https://abcd.company.com)
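As a sanity check outside Airflow, the same keys and endpoint can be tested directly with the CLI (bucket and key taken from the error above; https://abcd.company.com stands in for the real endpoint, as elsewhere in this post):
aws s3 ls s3://airflow_test/airflow-logs/ --endpoint-url https://abcd.company.com
aws s3api head-object --bucket airflow_test \
  --key airflow-logs/demo/task1/2022-05-13T18:20:45.561269+00:00/1.log \
  --endpoint-url https://abcd.company.com
If the head-object call also returns 403 here, the problem is with the credentials or bucket permissions rather than with Airflow itself.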
I have no clue how to resolve the error. I am actually very new to Airflow and Helm charts.
I had the same error message as you. Your account or key might not have enough permissions to access the S3 bucket.
Please check that your role has the permissions below:
"s3:PutObject*",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl",
"s3:GetObject*",
"s3:ListObject*",
"s3:ListBucket*",
"s3:PutBucket*",
"s3:GetBucket*",
"s3:DeleteObject

400 bad request when attempting connection to AWS Neptune with IAM enabled

I am unable to connect to a Neptune instance that has IAM enabled. I have followed the AWS documentation (correcting a few of my silly errors along the way) but without luck.
When I connect via my Java application using the SigV4Signer, and when I use the Gremlin console, I get a 400 Bad Request websocket error.
o.a.t.g.d.Handler$GremlinResponseHandler : Could not process the response
io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response getStatus: 400 Bad Request
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.verify(WebSocketClientHandshaker13.java:267)
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.finishHandshake(WebSocketClientHandshaker.java:302)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketClientHandler.channelRead0(WebSocketClientHandler.java:69)
When I run com.amazon.neptune.gremlin.driver.example.NeptuneGremlinSigV4Example (from my machine over port-forwarding AND from the EC2 jumphost) I get:
java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
I am able to connect to my neptune instance using the older deprecated certificate mechanism. I am using a jumphost ec2 instance and port-forwarding.
I believe that the SigV4 aspect is working, as in the Neptune audit logs I can see attempts to connect with the aws_access_key:
1584098990319, <jumphost_ip>:47390, <db_instance_ip>:8182, HTTP_GET, [unknown], [unknown], "HttpObjectAggregator$AggregatedFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: CompositeByteBuf(ridx: 0, widx: 0, cap: 0, components=0)) GET /gremlin HTTP/1.1 upgrade: websocket connection: upgrade sec-websocket-key: g44zxck9hTI9cZrq05V19Q== sec-websocket-origin: http://localhost:8182 sec-websocket-version: 13 Host: localhost:8182 X-Amz-Date: 20200313T112950Z Authorization: AWS4-HMAC-SHA256 Credential=<my_access_key>/20200313/eu-west-2/neptune-db/aws4_request, SignedHeaders=host;sec-websocket-key;sec-websocket-origin;sec-websocket-version;upgrade;x-amz-date, Signature=<the_signature> content-length: 0", /gremlin
But when I look
This is the policy that I created:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "neptune-db:*"
            ],
            "Resource": [
                "arn:aws:neptune-db:eu-west-2:<my_aws_account>:*/*"
            ]
        }
    ]
}
I have previously tried with a policy that references my cluster resource id.
I created a new api user with this policy attached as its only permission. (I've tried this twice).
IAM is showing me that the graph-user I created has not successfully logged in (duh).
It seems that the issue is with the IAM set-up somewhere along the line. Is it possible to get more information out of AWS about why the connection attempt is failing?
I am using the most recent release of Neptune and the 3.4.3 Gremlin Driver and console. I am using Java 8 when running the NeptuneGremlinSigV4Example and building the libraries to deploy to the console.
thanks
It appears from the audit log output that the SigV4 Signature that is being created is using localhost as the Host header. This is most likely due to the fact that you're using a proxy to connect to Neptune. By default, the NeptuneGremlinSigV4Example assumes that you're connecting directly to a Neptune endpoint and reuses the endpoint as the Host header in creating the Signature.
To get around this, you can use the following example code that overrides this process and allows you to use a proxy and still sign the request properly.
https://github.com/aws-samples/amazon-neptune-samples/tree/master/gremlin/gremlin-java-client-demo
I was able to get this to work using the following.
Create an SSH tunnel from your local workstation to your EC2 jumphost:
ssh -i <key-pem-file> -L 8182:<neptune-endpoint>:8182 ec2-user@<ec2-jumphost-hostname>
Set the following environment variables:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
export SERVICE_REGION=<region_id> (e.g. us-west-2)
Once the tunnel is up and your environment variables are set, use the following format with the Gremlin-Java-Client-Demo:
java -jar target/gremlin-java-client-demo.jar --nlb-endpoint localhost --lb-port 8182 --neptune-endpoint <neptune-endpoint> --port 8182 --enable-ssl --enable-iam-auth
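As an optional sanity check of the IAM auth itself, you can hit Neptune's /status endpoint directly from the EC2 jumphost with awscurl (not part of the original answer; install it with pip install awscurl). From the jumphost the Host header matches the real endpoint, so no override is needed:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
awscurl --service neptune-db --region eu-west-2 https://<neptune-endpoint>:8182/status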

LDAP Authentication - OpenShift - OKD

I have deployed a new OKD cluster (3.11), and as the identity provider I have selected LDAPPasswordIdentityProvider.
The configuration goes like this:
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=service,cn=users,cn=accounts,dc=myorg,dc=com', 'bindPassword': 'reallysecurepasswordhere', 'insecure': 'false', 'url': 'ldaps://idm.myorg.com:636/dc=myorg,dc=com?uid??(memberof=cn=openshift,cn=accounts,dc=myorg,dc=com)'}]
I have tried two dozens of possibilities with this URL.
On the logs I always get:
I0528 15:23:38.491659 1 ldap.go:122] searching for (&(objectClass=*)(uid=user1))
E0528 15:23:38.494172 1 login.go:174] Error authenticating "user1" with provider "idm": multiple entries found matching "user1"
I don't get why the filter shows up as (&(objectClass=*)(uid=...; it appears the filter from the URL isn't being applied, despite the URL being as above.
I also checked master-config.yaml and it is consistent with my ini file.
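For reference, a quick way to inspect what the playbook actually rendered on a master (default 3.11 paths assumed):
# on a master node
grep -n -A 15 'LDAPPasswordIdentityProvider' /etc/origin/master/master-config.yaml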
If I do ldapsearch I get the expected results:
$ ldapsearch -x -D "uid=service,cn=users,cn=accounts,dc=myorg,dc=com" -W -H ldaps://idm.myorg.com -s sub -b "cn=accounts,dc=myorg,dc=com" '(&(uid=user1)(memberof=cn=openshift,cn=groups,cn=accounts,dc=myorg,dc=com))' uid
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <cn=accounts,dc=myorg,dc=com> with scope subtree
# filter: (&(uid=user1)(memberof=cn=openshift,cn=groups,cn=accounts,dc=myorg,dc=com))
# requesting: uid
#
# user1, users, accounts, myorg.com
dn: uid=user1,cn=users,cn=accounts,dc=myorg,dc=com
uid: user1
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
The LDAP Server is FreeIPA.
Help please!
Ok, I found the solution to the problem.
I assumed, incorrectly, that running the playbook openshift-ansible/playbooks/openshift-master/config.yml would restart the openshift-master API.
It doesn't.
I noticed this when, instead of editing my ini inventory (where I have this set) and running the config playbook, I started editing /etc/origin/master/master-config.yaml directly and using master-restart api to restart the API.
Several URL alterations (many of them incorrect, actually) had never been applied. The config playbook uploaded them, but the master API doesn't restart, so the new config never takes effect, and I kept hitting the wall.
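For anyone hitting the same wall, a minimal sketch of the manual route described above, assuming a default 3.11 layout (the grep is just to confirm what the running config contains):
# on each master, after editing the identityProviders section:
vi /etc/origin/master/master-config.yaml
grep -A 15 'LDAPPasswordIdentityProvider' /etc/origin/master/master-config.yaml
master-restart api
master-restart controllers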

not authorized to perform: rds:DescribeDBEngineVersions

I implemented a REST API in Django with django-rest-framework; on localhost it works fine with successful results.
When pushing this up to an existing AWS Elastic Beanstalk instance, I received:
{
    "detail": "Authentication credentials were not provided."
}
For a solution I followed this question: Authorization Credentials Stripped
But when I push my code to AWS EB I get this error:
Pipeline failed with error "Service:AmazonRDS, is not authorized to perform: rds:DescribeDBEngineVersions"
I tried lots of solutions, but I get this error every time.
Note: I am using Python 3.6.
I found the answer to my problem.
I set the RDS policy, created a new custom_wsgi.config file in the .ebextensions directory, and added:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
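For the rds:DescribeDBEngineVersions part, the answer above sets an RDS policy; one way to do that from the CLI, assuming the default instance profile name aws-elasticbeanstalk-ec2-role (adjust to whichever role your environment actually uses):
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess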

Doctrine (with Symfony2) only tries connecting to the DB using root@localhost

The error (occurring in the prod env):
request.CRITICAL: PDOException: SQLSTATE[28000] [1045] Access denied for user 'root'@'localhost' (using password: YES) (uncaught exception) at /srv/inta/current/vendor/doctrine-dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php line 36 [] []
What I've tried so far
The weird thing is that I actually have access using the root user, and the provided password. Logging in as root via the console works great.
I'm using the following parameters.yml file located in app/config/
parameters:
    database_driver: pdo_mysql
    database_host: localhost
    database_port: ~
    database_name: int_apartments
    database_user: root
    database_password: pw goes here
    mailer_transport: smtp
    mailer_host: localhost
    mailer_user: ~
    mailer_password: ~
    locale: en
    secret: ThisTokenIsNotSoSecretChangeIt
As you can see, it is quite standard with only the name of the db, user and password changed.
In my config.yml located in app/config (the relevant portions)
imports:
    - { resource: security.yml }
    - { resource: parameters.yml }
...
doctrine:
    dbal:
        driver: %database_driver%
        host: %database_host%
        port: %database_port%
        dbname: %database_name%
        user: %database_user%
        password: %database_password%
        charset: UTF8
        dbname: int_apartments
    orm:
        auto_generate_proxy_classes: %kernel.debug%
        auto_mapping: true
        mappings:
            StofDoctrineExtensionsBundle: false
Now, I wanted to start at "step 1" and verify that the parameters.yml file is actually being imported, so I changed the host to "localhos" or the user to "tom" or whatever, and the error message in app/logs/prod.log stays exactly as it is - the location doesn't change and the user doesn't change.
So I checked my config_prod.yml located in app/config
imports:
    - { resource: config.yml }
#doctrine:
#    metadata_cache_driver: apc
#    result_cache_driver: apc
#    query_cache_driver: apc
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
        nested:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
...and everything seems standard!
Summary of what's going on
So here is the quick version.
Authentication error exists for root@localhost
Verified my authentication credentials by logging in as that user via the console
Want to check if the parameters.yml file is being loaded
Changed some values - none affected the error message
(Small) edit:
What I actually want to do is connect to the DB as a completely different user with a different password. Even when I enter different credentials into my parameters.yml file, Doctrine still spits out the "root@localhost" error.
Ideas?
Silly mistake; it seems it was due to a bad user/group/owner configuration on the server.
The app/cache directory is owned by "root", but when I run
app/console cache:clear --env=prod --no-debug
I am running as another user (not root). So there were issues clearing the cache, and Doctrine seems to have been using a very old configuration located in the cache files.
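Concretely, a cleanup along those lines might look like this (assuming the web server runs as www-data; adjust the user/group to your setup):
# run from the project root
sudo rm -rf app/cache/prod
sudo php app/console cache:clear --env=prod --no-debug
sudo chown -R www-data:www-data app/cache app/logs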
Lessons learned:
Always try running as root (as a last resort)
Use a properly configured web server to avoid ownership issues
I solved my problem by renaming the prod folder I uploaded to prod_old, because the system could not delete the folder for some reason.