Can you upload just app metadata using Transporter? - app-store-connect

I've got an app already in the store and need to add additional IAPs using Transporter. I've used lookupMetadata to get the metadata.xml file. I'd like to edit this file and then re-upload it without having to upload the app again. Looking at the docs, upload mode says you have to upload the app package:
In upload mode, you must specify these command-line options:
-m upload
-f or -assetFile <.ipa | .pkg> for macOS, Linux, and Windows uploads (for macOS notarization, use -assetFile <.dmg | .pkg | .zip>)
-assetDescription AppStoreInfo.plist (-assetDescription is required for Linux and Windows uploads)
-u username
-p password
-k kilobits_per_second *
Has anyone come across this before, and is there a solution for uploading just the app metadata?

It turns out that you can upload the itmsp file with just the metadata.xml in it and it works fine.
I validated with this:
iTMSTransporter -m verify -f <path to itmsp file> -u <email> -p <password> -v eXtreme
And uploaded with this:
iTMSTransporter -m upload -u <email> -p <password> -f <path to itmsp file> -k 100000 -v eXtreme
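For anyone else doing this: the package is nothing more than a folder named <something>.itmsp with the edited file inside it, along these lines (the folder name here is just an example):
mkdir MyApp.itmsp                  # any <name>.itmsp folder works
cp metadata.xml MyApp.itmsp/       # the edited metadata from lookupMetadata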

Related

How to save output of python-swiftclient to a file when downloading a directory?

Sometimes I get errors when I download files from a cloud with python-swiftclient, like this one:
Error downloading object 'uploads/1/image.png': Object GET failed: https://orbit.brightbox.com/v1/acc-12345/uploads/1/image.png 500 Internal Error b'An error occurred'
To search for all the errors and re-download the failed files, I want to save the output of the swift command to a file.
I tried the following two ways:
swift-cli -A https://orbit.brightbox.com/v1/acc-12345 \
-U user -K secret download uploads 2>&1 | tee uploads.log
# and
swift-cli -A https://orbit.brightbox.com/v1/acc-12345 \
-U user -K secret download uploads > uploads.log
But this didn't work. man swift describes the -o option:
For a single object download, you may use the -o [--output]
option to redirect the output to a specific file or if "-" then just redirect to stdout or with --no-download actually not to write anything to disk.
but when I try to download a directory with the -o option, it fails with
-o option only allowed for single file downloads
How can I save log to a file when I download a directory with swift CLI?
Actually, redirecting output to a file does work with swift-client:
swift-cli -A https://orbit.brightbox.com/v1/acc-12345 \
-U user -K secret download uploads > uploads.log
I was confused because after I started the command above, in another terminal window I did
tail -f uploads.log
But it didn't give me any output (like I was seeing when I was running the download command without redirection).
It seems that swift-client writes to the file in batches, and I needed to wait about a minute until tail -f dumped a hundred lines like this into the console:
uploads/documents/1/image.png [auth 0.000s, headers 0.390s, total 14.361s, 0.034 MB/s]
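With the log on disk, the failed objects can then be picked out and retried. A rough sketch, assuming the error lines look exactly like the one quoted in the question and that the leading "uploads/" in them is the container name:
grep "Error downloading object" uploads.log \
  | sed -E "s|Error downloading object 'uploads/([^']+)'.*|\1|" \
  | while read -r obj; do
      swift-cli -A https://orbit.brightbox.com/v1/acc-12345 \
        -U user -K secret download uploads "$obj"
    done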

Upload via scp gives "Permission denied" on Karaf ssh console

The scp command to get a file from the Karaf directories via the Karaf ssh console works well:
scp -P 8101 karaf@localhost:/deploy/README
(after I have entered the password)
But the reverse operation to upload a file fails with a "Permission denied" error:
scp README -v -P 8101 karaf@localhost:/deploy/
I tried removing the file locally first, same error. I gave 777 permissions on the "deploy" directory, and also tried with a new test directory.
Where could this be coming from?
Thanks,
Arnaud
Found it: the -P option cannot go just anywhere on the line; it has to come before the local and remote filenames.
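In other words, with the options up front the upload works:
scp -v -P 8101 README karaf@localhost:/deploy/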

Download (all) files via FTP with explicit TLS/SSL encryption

I've been trying to download files over FTP with explicit TLS/SSL encryption from one server to another using Debian. I tried a lot of commands like ftp and wget, but none of them worked; they all said the login is incorrect. I've searched the whole of Stack Overflow and Google.
I tried ftp and wget like this:
wget -m --user=username --password=password ftp://ip
and
ftp user@ip
Thanks in advance.
wget must be version 1.18 or above. The following was tested on 1.19.1:
wget -r --level=5 -m --no-remove-listing --reject "index.html" -c --progress=dot -N \
  --secure-protocol=auto --no-proxy --no-passive-ftp \
  --ftp-user=XXXXX --ftp-password=YYYYY --no-check-certificate \
  ftps://ZZZZZZZZ.com:21
Here is a link on how to build wget: http://www.linuxfromscratch.org/blfs/view/8.1/basicnet/wget.html
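To check which version you already have before going down that road:
wget --version | head -n 1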

Setting ssh public keys on Docker image

I set up a Docker image that supports ssh. No problem, lots of examples. However, most examples show setting a password using passwd. I want to distribute my image, and having a fixed password, especially for root, seems like a gaping security hole. Better, to me, is to set up the image with root having no password. When a user gets the image, they would then copy their public ssh key into the image's /root/.ssh/authorized_keys file.
Is there a recommended way to do this?
Provide a Dockerfile that builds on my image with an ADD command, which the user can edit?
Provide a shell script that runs something like "cat ~/.ssh/authorized_keys | docker run -i sh -c 'cat > /root/.ssh/authorized_keys'" (sketched out below)?
What about generating a private key and displaying it to the user?
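Spelled out, that second option would be something like this (the image name myimage and the commit step are just what I have in mind, nothing that exists yet):
cat ~/.ssh/authorized_keys | docker run -i --name keyed myimage \
    sh -c 'mkdir -p /root/.ssh && cat > /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys'
docker commit keyed myimage-with-key   # bake the key into a new image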
I use this snippet as part of the entrypoint script for an image:
# Generate a key pair on first start and authorize it for root.
KEYGEN=/usr/bin/ssh-keygen
KEYFILE=/root/.ssh/id_rsa
if [ ! -f "$KEYFILE" ]; then
    $KEYGEN -q -t rsa -N "" -f "$KEYFILE"
    cat "$KEYFILE.pub" >> /root/.ssh/authorized_keys
fi
# Print the private key so whoever started the container can copy it out.
echo "== Use this private key to log in =="
cat "$KEYFILE"
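Someone running the image can then grab that key from the container output and log in with it; roughly like this, with a hypothetical image name and port mapping:
docker run -d -p 2222:22 --name sshbox myimage        # hypothetical names
docker logs sshbox | sed -n '/BEGIN .*PRIVATE KEY/,/END .*PRIVATE KEY/p' > box_key
chmod 600 box_key
ssh -i box_key -p 2222 root@localhost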

Set environment variable SSH_ASKPASS or askpass in sudoers, respectively

I'm trying to log in to an ssh server and execute something like:
ssh user@domain.com 'sudo echo "foobar"'
Unfortunately I'm getting an error:
sudo: no tty present and no askpass program specified
Google told me either to set the environment variable SSH_ASKPASS or to set askpass in the sudoers file. My remote machine is running Debian 6, I've installed the packages ssh-askpass and ssh-askpass-gnome, and my sudoers file looks like this:
Defaults env_reset
Defaults askpass=/usr/bin/ssh-askpass
# User privilege specification
root ALL=(ALL) ALL
user ALL=(ALL) ALL
Can someone tell me what I'm doing wrong and how to do it better?
There are two ways to get rid of this error message. The easy way is to provide a pseudo terminal for the remote sudo process. You can do this with the option -t:
ssh -t user@domain.com 'sudo echo "foobar"'
Rather than allocating a TTY, or setting a password that can be seen on the command line, do something like this.
Create a shell script that echoes your password, like:
#!/bin/bash
echo "mypassword"
then copy that to the node you want using scp like this:
scp SudoPass.sh somesystem:~/bin
Then when you ssh do the following:
ssh somesystem "export SUDO_ASKPASS=~/bin/SudoPass.sh;sudo -A command -parameter"
Another way is to run sudo -S, which will "write the prompt to the standard error and read the password from the standard input instead of using the terminal device" (according to the man page), together with cat:
cat | ssh user@domain.com 'sudo -S echo "foobar"'
Just type the password when prompted.
One advantage is that you can redirect the output of the remote command to a file without "[sudo] password for …" in it:
cat | ssh user@domain.com 'sudo -S tar c --one-file-system /' > backup.tar
Defaults askpass=/usr/bin/ssh-askpass
ssh-askpass requires an X server, so instead of providing a terminal (via -t, as suggested by nosid), you may forward the X connection via -X:
ssh -X user@domain.com 'sudo echo "foobar"'
However, according to current documentation, askpass is set in sudo.conf as Path, not in sudoers.
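That is, something along these lines in /etc/sudo.conf, reusing the ssh-askpass path from the question:
Path askpass /usr/bin/ssh-askpass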
How about adding this to the sudoers file:
user ALL=(ALL) NOPASSWD: ALL
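If passwordless sudo for everything is broader than you want, the same idea can be limited to specific commands, for example the ones used in this thread:
user ALL=(ALL) NOPASSWD: /bin/echo, /bin/tar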