Why do I have to delete my build and dist files to successfully upload a new version of a package to PyPI?

I get a "400 Client Error: File already exists" if all I do is change the version number in setup.py.
My program is just a "hello world".
Some friends told me to delete build and dist, and then it works, but I couldn't find an explanation anywhere else on the internet for why that is. All the tutorials I came across either hand-waved the updating step or said that bumping the version number would allow this.

I'm assuming you're using twine upload dist/*. By default, this tells twine to upload everything in the dist directory. However, if you've already uploaded some of these distributions, you won't be able to upload them again.
Instead, you can either specify the exact distribution you're trying to upload:
twine upload dist/yourproject-1.0.0.tar.gz
Or use the --skip-existing flag:
twine upload --skip-existing dist/*
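This is also why deleting build and dist appears to be required: distributions from earlier versions stay in dist, so dist/* keeps trying to re-upload them. As a minimal sketch of a clean rebuild-and-upload cycle (assuming a setup.py-based project with wheel and twine installed):
rm -rf build dist
python setup.py sdist bdist_wheel
twine upload dist/*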

The issue with downloading files with Selenium in Jenkins

Testers have probably run into this issue.
Assume we have a test case that should be automated. It has a step that downloads a file from the web page by clicking a link, and the file lands in the local machine's downloads folder. The next step should verify that the file was downloaded.
On a local machine this can be handled easily using the download path. The problem I have is that this exact same test case fails in Jenkins (cloud run): it returns a null value because the download directory cannot be found in Jenkins.
Does anyone know what kind of solution we can use for this? I heard about downloading the file with an API request instead. The file is indeed fetched by a GET request with parameters, but I don't know how to perform that (see the sketch after the list below).
Thanks for your time.
I tried the options below:
Changing the download directory for Windows and Linux as per the documentation
Using the Jenkins home directory
What I want to do:
Verify that the file is downloaded
Read the file and check it against the DB (existing methods handle this)
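As a rough sketch of the GET-request idea mentioned above, assuming plain Java 11+ and a hypothetical URL with query parameters (the real URL, parameters, and any session cookie would have to come from the application under test), the download-and-verify step could look something like this:
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FileDownloadCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and parameters; replace with the real ones from the application under test.
        String url = "https://example.com/report/export?reportId=123&format=csv";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                // If the app needs a session, copy the cookie from the Selenium session here.
                // .header("Cookie", "JSESSIONID=...")
                .GET()
                .build();

        // Save the response into the build workspace instead of relying on a browser download folder.
        Path target = Path.of("target", "downloaded-report.csv");
        Files.createDirectories(target.getParent());
        HttpResponse<InputStream> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofInputStream());
        try (InputStream in = response.body()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }

        // The "file downloaded" check then becomes a simple existence/size assertion,
        // and the file can be read and compared against the DB as before.
        System.out.println("Downloaded " + Files.size(target) + " bytes to " + target);
    }
}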

Metaplex-master on github only has Readme file

I am trying to set up a Solana candy machine. I am using the Hasplips Metaplex-master, but it only has one readme file. It's supposed to have a js folder, some .json files, and more. Can anyone send me a link to the correct Metaplex-master for the candy machine? I can only find the Metaplex repo containing a readme file.
When I extracted the files, all I found was a readme file. I created a js folder myself and tried to run some yarn commands in the Visual Studio Code terminal, but I need the other .json files that were supposed to be there to execute the commands.
You are using a very old guide. The JS SDK was deprecated and removed from that repo months ago.
It is much easier to create a candy machine with Sugar, e.g. by following this guide: https://docs.metaplex.com/programs/candy-machine/how-to-guides/my-first-candy-machine-part1
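For reference, the Sugar flow that the guide walks through boils down to a handful of CLI commands run from the folder holding your assets and config file (command names taken from the Sugar docs; they may differ between versions):
sugar validate
sugar upload
sugar deploy
sugar verify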

How do I specify JRE when creating a Bamboo sidekick agent for their per-build-container plug-in?

I'm trying to get the sidekick image built and am having some issues. Is there any documentation other than the README.md file?
My current problem is getting the JRE requirement working, but there are others. The page says "download Oracle JRE and place it inside the working directory. Optionally if you have a company wide distribution url, use that one at a later step." and the help says "Java (JRE) download url or path inside working directory". I have not been able to get this to work.
I went to the JRE link provided and was presented with options to download an RPM file or a tar.gz file. Which one is expected (I was unable to get either one working)?
It says to place the file in the "working directory", but it's not clear where exactly. I tried the sidekick folder and sidekick/jre, both without success, no matter what I passed to the -j option. Is this just the path, or should the filename be included as well? Can I get an example?
I'm running this script with my login but noticed the output folder is created with root as the owner and group. I see no indication that this should be run with sudo. What is the correct way to run this script?
Using debug, I see the function "download if not cached". Can I save these files (the JRE, the Bamboo jar file, etc.) somewhere so I don't have to worry about downloading them? If so, where should they go? It looks like I might have a problem with the wget that downloads the jar file, so I would like to just place all of these in a folder and be done with it.
It turned out the major problem was that the script didn't clean up after itself when it failed. Once it failed the first time, that caused subsequent issues because the output folder was already there. Removing this directory between attempts helped.
As for the correct syntax for the -j JRE option: I manually downloaded the JRE and placed it in a folder called per-build-container/sidekick/stuff/. On the command line it is not just the path but the file name as well (the tar.gz, not the RPM). In my case it was:
-j stuff/jre-8u251-linux-x64.tar.gz
Note that I also ran the script with sudo. That wasn't stated anywhere, but it seemed to work OK.
Another issue I ran into was the download of the agent jar file. There is a redirect in the wget URL that was not working for us. I ended up editing the script and replacing the Atlassian-based URL with the redirected one.
This addresses all the issues I ran into with the initial question.

How to upload and then download files from an application using GlassFish 4.1.1 and Jelastic?

I was trying to upload and download files at the application level using a Jelastic server in the cloud, and I have some issues downloading the uploaded files.
To upload the files I use:
File folder = new File(".." + File.separator + "customFolder");
and the files are uploaded correctly inside of:
/opt/GlassFish/glassfish/domains/customDomain/customFolder/
and I can see the files using the Jelastic dashboard and SSH,
but if I try to download them through the application I get a 404 error
using this kind of approach
Link
I tried to use the instructions posted here:
Cann't get file from classpath (using NIO2)
but this doesn't work for me. I also tried some paths posted in the Jelastic documentation (https://jelastic.zendesk.com/hc/en-us/community/posts/206122066-Uploading-Files-to-a-Specific-Folder), but there is no clear explanation for GlassFish.
I also figured out that the files are in different locations inside the Jelastic application folders.
These are the different locations that I have found and tried to use for downloading the files (I changed the access permissions, still without a successful download):
/opt/GlassFish/glassfish/domains/customDomain/customFolder/
/opt/repo/versions/4.1.1/glassfish/domains/customDomain/customFolder/
/opt/shared/glassfish/domains/customDomain/customFolder/
So my question is: what is the correct path to download the files, or should I change the upload path?
An example using Java code returning the string with the download path would be appreciated.
I'm using GlassFish 4.1.1 for the application.
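The 404 would be consistent with the files living outside the deployed application's document root, in which case the application itself has to stream them back. As a minimal sketch, assuming the files really are under the running domain's directory and using GlassFish's com.sun.aas.instanceRoot system property to resolve it (the servlet mapping, customFolder, and the name request parameter are hypothetical):
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that streams an uploaded file back to the client.
@WebServlet("/download")
public class DownloadServlet extends HttpServlet {

    // Resolve the upload folder from the running domain rather than a relative "../" path,
    // so it does not depend on the server's working directory.
    private static String downloadPath(String fileName) {
        String domainDir = System.getProperty("com.sun.aas.instanceRoot"); // e.g. /opt/GlassFish/glassfish/domains/customDomain
        return Paths.get(domainDir, "customFolder", fileName).toString();
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        Path file = Paths.get(downloadPath(req.getParameter("name")));
        if (!Files.exists(file)) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("application/octet-stream");
        resp.setHeader("Content-Disposition", "attachment; filename=\"" + file.getFileName() + "\"");
        try (OutputStream out = resp.getOutputStream()) {
            Files.copy(file, out);
        }
    }
}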

sqlite3_analyzer not working in Ubuntu: missing shared object file

I am learning more about sqlite3 and am trying to use sqlite3_analyzer to view a bunch of data about my database. The problem is that when I download sqlite-analyzer-linux-x86-3071502.zip from https://www.sqlite.org/download.html, unzip the package, and then try to run the program, I receive this error: ./sqlite3_analyzer: error while loading shared libraries: libtcl8.6.so: cannot open shared object file: No such file or directory
Does anyone know where to get this libtcl8.6.so file? Does anyone know how to install this after obtaining it?
Install the package tcl8.6, or download the analyzer source code and recompile it with the Tcl version in your distribution.
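On Debian/Ubuntu-based systems, installing the package would typically look like this (assuming tcl8.6 is available in your release's repositories):
sudo apt-get install tcl8.6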
I ended up downloading an older version of sqlite3_analyzer from a third party website (do a search for sqlite-analyzer-linux-x86) that worked without the dependency. I won't post links as I can't ensure that they'll be available and serve the same file as I downloaded.
If you decide to do that, be sure to check the file for viruses on http://virustotal.com! Can't trust these Chinese file hostings ;)