Giver deployment issues - solidity

My software started saying my giver is not deployed, but it is deployed, obviously.
I guess this is about endpoints. Could you please help me wrap my head around what needs to be fixed? Thanks a lot!
everdev contract deploy FarmingCalculator.abi.json -n dev -s dev_giver -v 250000000 -d _randomNonce:13
Configuration
Network: dev (eri01.net.everos.dev, rbx01.net.everos.dev, gra01.net.everos.dev)
Signer: dev_giver (public 191fb7466066419bb44da39d58c2c1161255da87453b0447dc2500a4927b57db)
Address: 0:9781a951a6c8c8961e7f5467e308c6983189c3746557cf5cc5739d5a2dbc39a2 (calculated from TVC and signer public)
Error: Giver 0:5236a17bd571ad5cf8acda004158e94f97d61833c3a11484e23dd9e97374f9f7 has no code deployed.

try this:
everdev network credentials dev --project "Project Id"
(Don't forget to install the latest everdev version, and you need to be authorized.)
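For example, with the project ID and access key from your Evercloud dashboard (the values below are placeholders), the whole sequence would look roughly like this, after which the deploy can be retried:

everdev network credentials dev --project "a1b2c3d4" --access-key "e5f6a7b8"
everdev contract deploy FarmingCalculator.abi.json -n dev -s dev_giver -v 250000000 -d _randomNonce:13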

Related

How to use _auth and _authToken together with Artifactory?

I have a problem authenticating to Artifactory in both local and CI environments with .npmrc:
The local config works only with _authToken:
@render:registry=https://artifactory.corpname.io/artifactory/api/npm/npm/
//artifactory.corpname.io/artifactory/api/npm/npm/:_authToken=${JFROG_AUTH_TOKEN}
//artifactory.corpname.io/artifactory/api/npm/npm/:always-auth=true
//artifactory.corpname.io/artifactory/api/npm/npm/:email=myemail@corpname.io
The CI config works only with _auth:
@render:registry=https://artifactory.corpname.io/artifactory/api/npm/npm/
//artifactory.corpname.io/artifactory/api/npm/npm/:_auth=${JFROG_AUTH_TOKEN}
//artifactory.corpname.io/artifactory/api/npm/npm/:always-auth=true
//artifactory.corpname.io/artifactory/api/npm/npm/:email=myemail@corpname.io
I've tried adding both, hoping npm would pick whichever is compatible with the environment:
@render:registry=https://artifactory.corpname.io/artifactory/api/npm/npm/
//artifactory.corpname.io/artifactory/api/npm/npm/:_authToken=${JFROG_AUTH_TOKEN}
//artifactory.corpname.io/artifactory/api/npm/npm/:_auth=${JFROG_AUTH_TOKEN}
//artifactory.corpname.io/artifactory/api/npm/npm/:always-auth=true
//artifactory.corpname.io/artifactory/api/npm/npm/:email=myemail@corpname.io
This didn't help. What else can I do to get a consistent setup?
As far as I understand, the difference is that one environment uses npm login and the other basic auth, but what determines this? Both environments use the same Node version, and how to sync the setup is unclear to me.
Try something like:
curl -u %ARTIFACTORY_USER%:%ARTIFACTORY_KEY% https://artifactory.corpname.io/artifactory/api/npm/auth > ~/.npmrc
echo '@render:registry=https://artifactory.corpname.io/artifactory/api/npm/components-npm' >> ~/.npmrc
npm config set registry https://artifactory.corpname.io/artifactory/api/npm/mirror-npmjs-org
npm config set @render:registry https://artifactory.corpname.io/artifactory/api/npm/components-npm
Adjust the lines for Linux if needed.
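For a Bourne-style shell the same steps would look roughly like this (same hypothetical registry URLs and environment variable names as above):

curl -u "$ARTIFACTORY_USER:$ARTIFACTORY_KEY" https://artifactory.corpname.io/artifactory/api/npm/auth > ~/.npmrc
echo '@render:registry=https://artifactory.corpname.io/artifactory/api/npm/components-npm' >> ~/.npmrc
npm config set registry https://artifactory.corpname.io/artifactory/api/npm/mirror-npmjs-org
npm config set @render:registry https://artifactory.corpname.io/artifactory/api/npm/components-npm

The /api/npm/auth endpoint hands back ready-made auth lines for your account, which is why this tends to behave the same locally and in CI.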

kubectl version Error: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1

I was setting up my new Mac for my EKS environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown in the command block below.
My cluster uses the v1alpha1 client auth API version, so I wanted to use the same one on my Mac.
I tried with the latest version of kubectl (1.23.0) as well, and got the same error. With aws-iam-authenticator (version 0.5.5) I was not able to download a lower version.
Can someone help me resolve this?
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed the way it behaves in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try to change the K8s context you're using to v1beta1
by editing your kubeconfig file (usually in ~/.kube/config) and changing client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Otherwise, switch back to aws-iam-authenticator 0.5.3. You might need to build it from source if you're on the M1 architecture, as there's no darwin-arm64 binary built for it.
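For reference, the exec section of the kubeconfig should end up looking roughly like this after the edit (the user name, cluster name, and args below are illustrative, not from the question):

users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster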
This worked for me on an M1 chip:
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
I fixed the issue with the command below:
aws eks update-kubeconfig --name mycluster
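If the cluster lives in a non-default region, pass it explicitly (the cluster and region names here are placeholders):

aws eks update-kubeconfig --name mycluster --region us-east-1

This rewrites the exec section of the kubeconfig with whatever API version your current AWS CLI emits, which is why it fixes the mismatch.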
I also solved this by updating the apiVersion value in my kube config file (~/.kube/config) from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Also make sure the AWS CLI version is up to date. Otherwise, AWS IAM Authenticator might not work with v1beta1:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
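You can then confirm which version is installed (the exact version string will differ on your machine):

aws --version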
This might be helpful for fixing this issue for those who are using GitHub Actions.
In my situation I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to each step that uses kodermax/kubectl-aws-eks, to pin them to fixed versions.
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file with:
kubectl 1.21.9
This question is a duplicate of: error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" CircleCI
Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old
apiVersion: client.authentication.k8s.io/v1alpha1
New
apiVersion: client.authentication.k8s.io/v1beta1
Sometimes this can happen if the kube cache is corrupted (which happened in my case).
Deleting and recreating the folder below worked for me.
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube
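Note that this also deletes your kubeconfig, so you'll need to regenerate it afterwards, e.g. for an EKS cluster (the cluster name is a placeholder):

aws eks update-kubeconfig --name mycluster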

How are the --network options used in Podman?

I am running a virtual environment on CentOS with Podman.
When I use the --net option of the podman run command, I get an error.
[user@server ~]$ podman run --net slirp4netns:port_handler=slirp4netns -p 1080:80 -d --name web nginx
Error: cannot join CNI networks if running rootless: invalid argument
Is this option unavailable?
Or is there a problem with the way the options are specified?
Please tell me the solution.
I used this site as a reference for the command.
This is the configuration of the server.
[user@server ~]$ cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[user@server ~]$ podman -v
podman version 2.0.6
The port_handler option requires Podman >= 2.1.0, which hasn't been released at the moment of writing: https://github.com/containers/podman/commit/d86bae2a01cb855d5964a2a3fbdd41afe68d62c8
You can use that option if you compile Podman from its master branch.
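A rough sketch of building Podman from source, assuming Go and the usual container build dependencies are installed (the build tags vary by distro):

git clone https://github.com/containers/podman.git
cd podman
make BUILDTAGS="selinux seccomp"
sudo make install PREFIX=/usr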
I found these links quite helpful for understanding rootless networking:
https://www.redhat.com/sysadmin/container-networking-podman
https://podman.io/getting-started/network
I am not sure if you have seen these links before, or even whether they are helpful to you in this instance. But, in the interest of helping others, I think the blog posts make the following helpful points:
Note: All podman network commands are for rootful containers only.
Technically, the container itself does not have an IP address, because without root privileges network device association cannot be achieved.
When using Podman as a rootless user, the network is set up automatically. The container itself does not have an IP address, because without root privileges network association is not allowed. You will also see some other limitations.
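In practice, plain rootless port publishing to an unprivileged host port works without any --net options (the image and ports here are just for illustration):

podman run -d --name web -p 8080:80 nginx

Rootless Podman routes this traffic through slirp4netns automatically; you only hit restrictions when you try to bind privileged ports (below 1024) or join CNI networks.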

Docker-machine : ca.pem not found

Here I am creating a test machine (dev) using docker-machine.
$ docker-machine create -d virtualbox dev
Creating CA: C:\Users\xxx\.docker\machine\certs\ca.pem
Creating client certificate: C:\Users\xxx\.docker\machine\certs\cert.pem
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
The VM gets created and runs without flaws.
And here is the error when I run the following command:
$ docker-machine env dev
open C:\Users\xxx\.docker\machine\machines\dev\ca.pem: The system cannot find the file specified.
I have no idea how to deal with this problem. I tried restarting boot2docker.
You should try using docker-machine regenerate-certs dev. The problem, I think, is that somehow your .pem file got deleted or was never created. I had the same issue, and regenerating the certs fixed the problem (a reboot did not help, by the way).
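The command asks for confirmation before regenerating; if you want to skip the interactive prompt (for example in a script), there is a --force flag:

docker-machine regenerate-certs --force dev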
I guess you are getting the Docker-machine : ca.pem not found error even when you use docker info or any other docker command.
Try this command: docker-machine env -u
The output will be similar to:
unset DOCKER_TLS_VERIFY
unset DOCKER_HOST
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
# Run this command to configure your shell:
# eval $(docker-machine env -u)
Now enter eval $(docker-machine env -u).
This should do the trick. Finally, try docker info to be sure.
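Once the stale variables are unset, you can point the shell back at your machine (using the machine name from the question):

eval $(docker-machine env dev)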
I was getting the exact same error. It turned out to be the Cisco AnyConnect client affecting my networking settings. It's not enough to quit AnyConnect; you have to reboot your machine to restore your settings.
If someone knows more about how AnyConnect is affecting things and if there are solutions better than rebooting, I'd love to hear about it!
Copy certificates from "C:\Users\xxx\.docker\machine\certs"
Paste certificates to "C:\Users\xxx\.docker\machine\machines\dev"
NOTE: This error was on Windows 10 Docker
Here was my error:
user ➜ git-repo git(users/user/dev) ✗ docker
unable to resolve docker endpoint: open C:\Users\user\.docker\ca.pem: The system cannot find the file specified.
Here is the link to the shell script I used to recreate the certificates; I named it generate_docker_cert.sh:
https://gist.github.com/bradrydzewski/a6090115b3fecfc25280
So I went to the directory from the error output:
cd C:\Users\user\.docker\
Created that file:
notepad generate_docker_cert.sh
Copied the values from the link into the file and saved it.
Then ran that .sh file:
.\generate_docker_cert.sh
Then the docker command worked:
user ➜ git-repo git(users/user/dev) ✗ docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
...

Running Redis on Travis CI

I just included a Redis store in my Express application and got it to work.
I wanted to include this Redis store in Travis CI so that my code keeps working there. I read in the Travis documentation that it is possible to start Redis with the factory settings.
In my project I don't use the factory settings; I wrote my own redis.conf file, which specifies the port and the password.
So I added the following lines to my .travis.yml file:
services:
  - redis-server --port 6380 --requirepass 'secret'
But this returns the following on Travis CI:
$ sudo service redis-server\ --port\ 6380\ --requirepass\ \'secret\' start
redis-server --port 6380 --requirepass 'secret': unrecognized service
Is there any way to fix this?
If you want to customize the options for Redis on Travis CI, I'd suggest not using the services section, but rather doing this:
before_script: sudo redis-server /etc/redis/redis.conf --port 6380 --requirepass 'secret'
The services section runs services using their init/upstart scripts, which may not support the options you've added there. The command is also escaped for security reasons, hence the documentation only hinting that you can list normal service names in that section.
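In a full .travis.yml that would look like this (port and password taken from the question):

before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6380 --requirepass 'secret'

Your test code then needs to connect to port 6380 with that password (for example via environment variables), so the same configuration works both locally and on Travis.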