Is there a way to download this homestead virtualbox file? - amazon-s3

Short Question
I am currently trying to install Homestead. The download from the following URL keeps failing:
https://atlas.hashicorp.com/laravel/boxes/homestead/versions/2.1.0/providers/virtualbox.box
Is there any other way to download the above file?
Details:
The download of the above file is redirected to a signed S3 URL, which expires after 60 seconds (see the X-Amz-Expires=60 parameter below). So on my connection the download fails after roughly 10%.
Have a look at the following:
vagrant init laravel/homestead; vagrant up --provider virtualbox
...
==> default: Adding box 'laravel/homestead' (v2.1.0) for provider: virtualbox
default: Downloading: https://atlas.hashicorp.com/laravel/boxes/homestead/versions/2.1.0/providers/virtualbox.box
==> default: Box download is resuming from prior download progress
default:
An error occurred while downloading the remote file.
The error message, if any, is reproduced below.
Please fix this error and try again.
I have also tried downloading the file by other means, e.g. through a browser or with curl. A request to the URL above returns a signed S3 link, which carries that short expiry in plain sight, as you can see in the URL I get:
https://hc-prod-storagelocker.s3.amazonaws.com/boxes/5b64bd3b-eb87-4af4-9b2d-1c1560efca67?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJBKZ6DNPERBCPYKQ%2F20170613%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20170613T022000Z&X-Amz-Expires=60&X-Amz-SignedHeaders=host&X-Amz-Signature=0d13db989138a93b4ab82d18c4768b141e837d0c834654f73f5394e1cd04ce0e
My connection is not absurdly slow, but the 60-second expiry of the signed URL clearly assumes far better bandwidth than I have.

Due to the slow connection, the download kept breaking. Even though vagrant says:
Box download is resuming from prior download progress
the progress percentage gave no indication that it was actually resuming.
I kept re-running the command, and eventually the download did account for the previous progress and completed.
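The "keep trying" approach above can be automated with a small retry wrapper. This is a sketch; the function name, retry count, and delay are arbitrary choices, not part of vagrant:

```shell
# retry_until_ok: re-run a command until it succeeds or a retry limit
# is reached -- the same "run it over and over" approach described above,
# relying on vagrant resuming the box download between attempts.
retry_until_ok() {
  local max="$1"; shift
  local n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep 1   # brief pause between attempts
  done
}

# Example usage (hypothetical retry count):
#   retry_until_ok 50 vagrant up --provider virtualbox
```

Each failed attempt leaves a partial download behind, so later attempts start further along.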

Related

Calling a Jenkins job from a Codefresh pipeline fails with: x509: failed to load system roots and no roots provided

I have a Jenkins job which I would like to invoke from my Codefresh pipeline.
Using the following example from the Codefresh docs, I have my Codefresh pipeline configured and ready:
https://codefresh.io/docs/docs/integrations/jenkins-integration/#calling-jenkins-jobs-from-codefresh-pipelines
The resulting build runs with the following output:
Pulling image codefresh/cf-run-jenkins-job:latest
Pulled layer '1160f4abea84'
Pulled layer '6df1582e0e0e'
Digest: sha256:a95b23c24b51d5fc1705731f7d18c5134590b4bc61b91dcf5a878faf2aec60b3
Status: Downloaded newer image for codefresh/cf-run-jenkins-job:latest
INFO[0000] Going to trigger <jenkins_job_name> job on https://<jenkins_host>:8443
ERRO[0000] Post https://<jenkins_host>:8443/job/<jenkins_job_name>/build: x509: failed to load system roots and no roots provided
Successfully ran freestyle step: Triggering Jenkins Job
Reading environment variable exporting file contents.
Reading environment variable exporting file contents.
As you can see, the build fails to successfully trigger the Jenkins job.
After some research on the Internet, I came to the conclusion that this is an SSL certificate issue.
But I have no idea how to proceed from here. What exactly is missing, and where should it be configured? I would really appreciate any help.
Do you know what kind of SSL configuration your Jenkins server has? Is it mutual (client-certificate) authentication or just a server-side certificate? Is the certificate self-signed?
Have you tried calling the Jenkins API yourself (outside of Codefresh) to confirm that SSL works fine there?
Also, I would suggest opening a support ticket (from the top-right menu in the Codefresh UI) and mentioning the URL of the build that has this issue.
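One way to test the Jenkins API outside of Codefresh is to reproduce the same certificate verification with curl. A minimal sketch; the function name, host, port, and CA bundle path are placeholders, not values from the question:

```shell
# check_jenkins_tls: request the Jenkins JSON API over HTTPS, verifying
# the server certificate against an optional CA bundle -- roughly the
# same verification the failing Go client performs when it complains
# about "system roots" / "no roots provided".
check_jenkins_tls() {
  local host="$1" ca_bundle="$2"
  if [ -n "$ca_bundle" ]; then
    # Succeeds only if the server certificate chains to the supplied CA.
    curl -fsS --cacert "$ca_bundle" "https://${host}:8443/api/json" >/dev/null
  else
    # Uses the system trust store, as a stock TLS client would.
    curl -fsS "https://${host}:8443/api/json" >/dev/null
  fi
}

# Example (placeholder values):
#   check_jenkins_tls jenkins.example.com /path/to/ca.pem
```

If the call only succeeds with an explicit CA bundle, the Jenkins certificate is self-signed or issued by a private CA, and that CA needs to be made available to the Codefresh step.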

GitHub Desktop / Git Shell: SSL read: error:00000000:lib(0):func(0):reason(0), errno 10054

Since this morning, when I try to sync from GitHub Desktop after creating a commit, I sometimes get an error message.
When I try to push the commit in Git Shell, I get:
SSL read: error:00000000:lib(0):func(0):reason(0), errno 10054
What could be the issue?
It takes around one or two minutes between the time I try to push and the time I get the error message (in GitHub Desktop or Git Shell), so I suspect a connection issue on GitHub's side (I have checked that the connection on my side is solid), but I find the message cryptic.
I use GitHub Desktop with Windows 7 SP1 x64 Ultimate.
Change the URL from https to http; that worked for me. There must be a problem with the SSL tunnel, but for now you can get going by switching to http.
I got here looking for a solution too: I was getting an identical SSL read: error... when trying to push a commit. The cause may be some big files in the commit you were trying to push to GitHub.
I followed the steps at https://git-lfs.github.com and that seems to have solved my problem.
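If large files are the suspect, it helps to list them before pushing. A small sketch; the 50 MB threshold is an arbitrary choice (GitHub's hard per-file limit is 100 MB):

```shell
# List working-tree files over 50 MB -- candidates to move to Git LFS
# before pushing. The .git directory itself is excluded, since pack
# files in there can legitimately be large.
find . -type f -size +50M -not -path './.git/*'
```

Any file this prints is worth routing through Git LFS (git lfs track) rather than pushing directly.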

Gitlab-shell check failed 302 unable to clone/push/pull

A lot of people have asked this question, but that was two months ago with another GitLab version.
I'm using GitLab 5.2 on a fresh Debian 7.0 server.
Everything looks okay on the website, but when I run /home/git/gitlab-shell/bin/check I get this error:
Check GitLab API access: FAILED. code: 302
Check directories and files:
/home/git/repositories: OK
/home/git/.ssh/authorized_keys: OK:
I'm running SSH on a custom port, but I'm able to connect.
When pushing I get this error:
git push -vu origin master
Pushing to ssh://git@apps.ndd.fr:2232/Users/test.git
fatal: The remote end hung up unexpectedly
Thanks for your answers!
I just got the same error and went digging in the code.
What I found is that the gitlab_net module requests its answer at #{host}/check (gitlab-shell/lib/gitlab_net.rb).
The host method is defined as "#{config.gitlab_url}/api/v3/internal", and at the same time config.gitlab_url, defined in ./gitlab-shell/config.yml, "Should end with a slash". So my web server just returns a 302 redirect on that request to strip the resulting double slash.
FYI: that failure concerns the API, not the web service, so in many cases it is non-critical anyway.
I think it's a minor bug in the code, and there is a closed issue about it: https://github.com/gitlabhq/gitlabhq/issues/3483
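The double slash described above is easy to reproduce in isolation (the hostname below is illustrative, not from the question):

```shell
# config.gitlab_url ends with a slash (as config.yml instructs), and
# gitlab-shell then appends "/api/v3/internal/...", producing a double
# slash that some web servers answer with a 302 redirect.
gitlab_url="http://gitlab.example.com/"          # illustrative value
check_url="${gitlab_url}/api/v3/internal/check"
echo "$check_url"
# -> http://gitlab.example.com//api/v3/internal/check
```

A redirect from that URL to the de-duplicated one is exactly the 302 that gitlab-shell's check reports as a failure.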

TortoiseSvn suddenly raises "OPTIONS SSL handshake failed: SSL error: sslv3 alert illegal parameter" on Windows 7

A client of mine is having trouble with TortoiseSVN. It was working fine until now; she made her last commit on Thursday, Feb. 23, 2013. But now she gets the following error:
OPTIONS SSL handshake failed: SSL error: sslv3 alert illegal parameter
She cannot access the Repository anymore. No update, no checkout, no log, etc.
It is difficult to locate the problem. It shows up with tsvn 1.7.4 and 1.7.11:
She cannot use tsvn with the ProjectRepository
She cannot use svn commandline client (http://www.sliksvn.com/en/download) with the ProjectRepository
She can use tsvn with a PlaygroundRepository on another Server
She can access ProjectRepository with IE and with Firefox
She can access ProjectRepository with SmartSvn
I can use tsvn in their network with the ProjectServer from my macbook with parallels.
I entirely uninstalled/reinstalled tsvn: no success.
I deleted %appdata%\Roaming\Subversion: no success.
As an act of desperation, I installed SmartSvn, which lets her work again, but this cannot be the solution.
It must be the combination of tsvn, her machine, and the ProjectRepository/Server; her machine works with the PlaygroundRepository on another server.
Any idea is highly welcome, in particular because it worked last week with tsvn 1.7.4.
So the only thing that might have changed is some updates on the Windows box.
Check for the installation of security update MS12-006 on the client. That hotfix broke a lot of things. Roll it back and see if connections are successful.

Apache upload failed when file size is over 100k

Below is some information about my problem.
Our Apache2.2 is on windows 2008 server.
Basically, the problem is that users fail to upload files bigger than 100 KB to our server.
The error in Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
A few times (not always) I could upload larger files (100 KB-800 KB; 20 MB always failed) in Chrome. In FF4 uploading a file over 100 KB always fails, and IE8 behaves similarly to FF4.
It seems Apache fails to receive the request body from the client, so I reset the Timeout directive in the Apache settings to its default value (300), which did not help at all.
I do not have the LimitRequestBody directive set, and I am not using PHP. Has anyone seen a similar error before? I am not sure what to try next. Any advice would be appreciated!
Edit:
I just tried using Remote Desktop to upload the files directly on the server, and that worked fine. My first thought was the firewall, but it is off all the time; an HTTP proxy is in place, though.
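For reference, these are the two httpd.conf directives most relevant to this symptom. The directive names are standard Apache; the values below are illustrative, not the poster's actual settings:

```apache
# Give slow clients more time to deliver the request body
# (300 seconds is the Apache 2.2 default).
Timeout 300

# Cap on the request body size in bytes; 0 (the default) means unlimited.
# Setting it explicitly rules out a limit inherited from elsewhere --
# roughly 20 MB here.
LimitRequestBody 20971520
```

Since uploads work via Remote Desktop on the server itself but fail from outside, the HTTP proxy's own body-size and timeout limits are also worth checking.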