Cannot extract jester dataset in windows 10 - gzip

I know this topic already appears in other posts, but I haven't been able to solve it, and your help would be very much appreciated.
I'm trying to extract the Jester V1 Hand Gesture Recognition dataset, but it comes in a very strange format when downloaded. I've tried the command cat 20bn-jester-v1-?? | tar zx in Windows PowerShell, which made my computer run something (I don't know what) for several hours (I know it was running because my computer was slow), but nothing changed in the files. I also tried it in the command prompt, which gave me an error. I've also tried to extract the pieces with 7-Zip: I was able to extract the first file, but not the rest.
Please help. I'm using Windows 10 and have already installed 7-Zip; here's a screenshot of the files.
Thank you so much!

This page says to run cat 20bn-jester-v1-?? | tar zx to extract the videos. However, it also says that those 23 pieces are about 1 GB each, while your screenshot shows them as less than 1 MB each. Perhaps something went wrong further upstream.

A solution I found to this problem is to install a Linux terminal on Windows, via the Windows Subsystem for Linux with Ubuntu.
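To see why that pipeline works: the 23 pieces are just byte-chunks of one large .tar.gz, so concatenating them rebuilds the archive, which tar then decompresses and unpacks in one pass. Here is a minimal sketch with a tiny stand-in archive (all file names are illustrative, not from the dataset):

```shell
# Build a tiny stand-in archive, chop it into pieces the way the
# dataset is distributed, then reassemble and extract it.
mkdir -p demo/src demo/out
echo "hello" > demo/src/clip.txt
tar czf demo/archive.tar.gz -C demo src
split -b 100 demo/archive.tar.gz demo/20bn-jester-v1-   # makes ...-aa, -ab, ...
# Same shape as the documented command; 'zxf -' reads the archive from stdin.
cat demo/20bn-jester-v1-?? | tar zxf - -C demo/out
```

If the concatenated pieces are complete, demo/out/src/clip.txt comes back intact; if any piece is missing or truncated (which the tiny file sizes in your screenshot suggest), gzip reports an unexpected end of file instead.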

Related

Chrome OS bug - file select dialog (for uploads) 'stuck'

The best way to sum up the issue would be with a screenshot, but unfortunately my screenshots auto-save to Downloads, so I can't upload them. What's happening is that when I open the 'file select' dialog to upload a file, it starts in the 'Drive' folder and won't move to any other folder. I've tried restarting and resetting my machine, tried the upload process on a bunch of different platforms, tried the other user account on my machine, and tried updating my software, but none of these have made any difference. I can get into my Downloads folder and open files from it fine outside of this context, and I can work around the problem by using drag-and-drop on platforms that support it, but otherwise I'm stuck.
I've googled extensively to see if anyone else is having this issue and found this thread: https://productforums.google.com/forum/#!topic/chromebook-central/d7g9EEDsr8w but there's no helpful solution there (a powerwash was recommended, but the asker had already done that several times). I've also tried to find a solution with the help of my (programmer) employer, with no luck, so he recommended asking here. It seems like it wouldn't be a hardware issue when I can still access the folder outside of this specific function; but if it were a problem with the running system, it seems it would be happening across the board and therefore show up more in a Google search. If anyone has any suggestions I'd be very grateful, as it's getting quite tiresome having to drag and drop things into Facebook messages to get them uploaded! The machine is less than a year old, so if I can't find a solution I'll see about getting it replaced under warranty. Thanks in advance for any help, and please let me know if there's any key info I've left out!
Machine: Samsung Chromebook XE303C12
OS: Version 38.0.2125.110

BeagleBone Black (Debian image 2014-05-14) goes to sleep after 10 minutes

As the title says, the board goes to sleep after 10 minutes. All I want is to SSH into the board (no keyboard/mouse or monitor attached). After googling for a good while, all I found were some settings for X (the GUI). I have also tried the following command:
setterm -blank 0 -powersave off -append
It gives me the following error
setterm: cannot (un)set powersave mode: Inappropriate ioctl for device
How can I tweak or completely disable this power management? I am pretty sure it is not a scheduled task or a process, but rather the kernel itself and a setting I couldn't find.
Thanks in advance!
I'm not entirely sure, but if your /sys/power/state file has a state, i.e. some string is returned when you read it, try echo -n '' > /sys/power/state (as root) to blank the file.
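For what it's worth, here is a way to see what that file actually contains, along with the kernel's console-blanking timeout, which defaults to 600 seconds (exactly 10 minutes) but only affects a local display, not SSH. These are standard Linux sysfs paths, not anything BeagleBone-specific:

```shell
# Read the supported sleep states, if the kernel exposes them.
states=$(cat /sys/power/state 2>/dev/null)
echo "supported sleep states: ${states:-none exposed}"
# Console blanking is a separate mechanism with a 600-second default.
blank=$(cat /sys/module/kernel/parameters/consoleblank 2>/dev/null)
echo "console blanking timeout: ${blank:-unknown} seconds"
```

If /sys/power/state lists nothing, the kernel can't suspend at all, which would rule out real sleep as the cause of the dropouts.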
In my case I had left the installation SD card inserted in the board. It wasn't power management; rather, the distro would install itself over and over again, hence the roughly 10-minute interval.
I am pretty sure the answer about the PM settings file is correct; however, it wasn't my case.
Regards,
DAN

Autoingestion.class has stopped creating report files

A few months ago I wrote some scripts to fetch the iTunes Connect sales reports automatically. Today I noticed that the scripts had stopped working correctly, so I searched for the problem.
Apparently the Autoingestion tool from Apple (Autoingestion.class) has stopped creating the expected output files...
Usage Example:
java Autoingestion user *pw* vendor Sales Daily Summary 20130401
Syntax is still correct regarding http://www.apple.com/itunesnews/docs/AppStoreReportingInstructions.pdf
The tool runs fine without any errors; just the expected output file is missing :(.
To rule out problems with Java, I tested the tool on different platforms with different JVM versions.
Is anyone else experiencing this problem?
I had the same issue, but per your own comment, I tried it on another box with an up-to-date JRE, and it worked. (I'd tried it on my dev box, but running it on the live box worked. Makes sense.)
Just adding, this is the output we want to see:
$ java Autoingestion autoingestion.properties <vendor> Sales Daily Summary 20130801
S_D_<vendor>_20130801.txt.gz
File Downloaded Successfully
I'm inclined to think there should always be output from running Autoingestion. I actually had the wrong credentials when I first ran it, but in the wrong environment it couldn't even tell me that.
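Since the tool can exit cleanly without writing anything, it may be worth having the calling script verify that a report actually appeared. This is a hypothetical wrapper, not part of Apple's tooling; the S_D_*.txt.gz pattern matches the output naming shown above:

```shell
# Run a report command and warn if no new S_D_*.txt.gz file appeared.
run_report() {
    before=$(ls S_D_*.txt.gz 2>/dev/null | wc -l)
    "$@"
    after=$(ls S_D_*.txt.gz 2>/dev/null | wc -l)
    if [ "$after" -le "$before" ]; then
        echo "warning: no new report file was created" >&2
        return 1
    fi
}
# Usage (arguments as in the example above):
# run_report java Autoingestion autoingestion.properties <vendor> Sales Daily Summary 20130801
```

The non-zero return lets a cron job or monitoring hook catch the silent-failure case this thread describes.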

Why is the Server.Transfer process slow in VB.NET?

I need your help. I have a problem with Server.Transfer in VB.NET: it runs very slowly.
My questions:
Why does it run slowly (it takes 5 minutes to move between web pages (.aspx))?
What should I check to troubleshoot this?
Is it because of the operating system? I use Windows 7; before, when I used Windows XP, there was no problem like this.
Is Server.Transfer related to the database connection (not sure)? I use MySQL (from the XAMPP package).
Or maybe it's some other configuration that I missed in Windows 7.
FYI: I've tried several web browsers with the same result (loading takes 5 minutes).
Thanks to everyone who answers my question, thank you very much!
One thing I've found on this is that it can have to do with the status code the transferred page returns. If it returns a 500 error, it can make your server transfer run upwards of five minutes.
One way to test this, if you can, is to run the transferred page in isolation and generate any of the information being transferred on the other side to see if any errors are generated.
It took me a day to figure this out. Hopefully it helps someone else.
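One quick way to test the 500-error theory from this answer is to request the target page directly and look only at its status code. The URL below is a placeholder for whatever page your code transfers to:

```shell
# Placeholder URL: substitute the page your Server.Transfer targets.
URL="http://localhost/TargetPage.aspx"
# curl prints the status code; 000 means the request never completed.
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL" 2>/dev/null)
echo "HTTP status: ${code:-000}"
```

A 500 here (or a long hang followed by 000) points at the transferred page itself rather than at Server.Transfer.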

Force a Samba process to close a file

Is there a way to force a Samba process to close a given file without killing the process?
Samba opens a process for each client connection, and sometimes I see it hold files open far longer than needed. Usually I just kill the process, and the (Windows) client reopens the file the next time it accesses the share; but sometimes the process is actively reading another file for a long time, and I'd like to 'kill' just one file, not the whole connection.
edit: I've tried 'net rpc file close <fileid>', but it doesn't seem to work. Does anybody know why?
edit: this is the best mention I've found of something similar. It seems to be a problem in the Win32 client, something that Microsoft servers have a workaround for but Samba doesn't. I wish the net rpc file close <fileid> command worked; I'll keep trying to find out why. I'm accepting LuckyLindy's answer, even though it didn't solve the problem, because it's the only useful procedure in this case.
This happens all the time on our systems, particularly when connecting to Samba from a Win98 machine. We follow these steps to solve it (which are probably similar to yours):
See which computer is using the file (e.g. lsof | grep -i <file_name>)
Try to open that file from the offending computer, or see if a process is hiding in Task Manager that we can close
If no luck, have the user exit any important network programs
Kill the user's Samba process from Linux (e.g. kill -9 <pid>)
I wish there was a better way!
I am creating a new answer, since my first answer really just contained more questions, and really was not a whole lot of help.
After doing a bit of searching, I have not been able to find any current open bugs for the latest version of Samba. Please check the Samba bug report website and create a new bug; this is the simplest way to get someone to suggest ideas about how to fix it, and to have developers look at the issue. LuckyLindy left a comment on my previous answer saying that this is how it has been for 5 years now; well, the project is open source, and the best way to fix something that is wrong is by reporting it and/or providing patches.
I have also found one mailing list entry: Samba Open files. They suggest adding posix locking = no to the configuration file. As long as you aren't also handing the files out over NFS, not locking the file should be okay; that is, if the file being held open is locked at all.
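For reference, that change would look like this in smb.conf (the share name and path are placeholders; keep the NFS caveat in mind before deploying it):

```ini
# Sketch of the mailing-list suggestion; [data] is a placeholder share.
[data]
    path = /srv/data
    posix locking = no
```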
If you wanted to, you could write a program that uses ptrace to attach to the process and then goes through, unlocking and closing all the files. However, be aware that this might leave Samba in an unknown state, which can be more dangerous.
The workaround I have already mentioned is to restart Samba periodically. I know it is not a solution, but it might work temporarily.
This is probably answered here: How to close a file descriptor from another process in unix systems
At a guess, 'net rpc file close' probably doesn't work because the interprocess communication telling Samba to close the file doesn't get looked at until the file you want to close is done being read.
If there isn't an explicit option in Samba, then it would be impossible to close an open file descriptor externally with standard Unix interfaces.
Generally speaking, you can't meddle with a process's file descriptors from the outside. As root you can, of course, as seen in this Phrack article from 1997: http://www.phrack.org/issues.html?issue=51&id=5#article - but I wouldn't recommend doing that on a production system.
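For the curious, the usual modern form of that trick is to let gdb attach (via ptrace) and have the target process close the descriptor itself. This sketch spawns a throwaway process rather than touching smbd, since doing this to a live Samba daemon carries the same risk described above; the fd number is illustrative:

```shell
# Spawn a disposable process, then try to close its stdin (fd 0)
# from the outside by attaching with gdb.
sleep 30 & PID=$!
if command -v gdb >/dev/null 2>&1; then
    gdb -p "$PID" --batch -ex 'call (int)close(0)' >/dev/null 2>&1 \
        && echo "closed fd 0 in process $PID" \
        || echo "could not attach (ptrace may be restricted)"
else
    echo "gdb not installed"
fi
kill "$PID" 2>/dev/null
```

Whether the attach succeeds depends on ptrace permissions (e.g. kernel.yama.ptrace_scope), which is one more reason this is a last resort rather than a routine tool.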
The better question in this case would be why? Why do you want to close a file early? What purpose does it ultimately have to close the file? What are you attempting to accomplish?
Samba provides commands for viewing open files and closing them.
To list all open files:
net rpc file -U ADadmin%password
Replace ADadmin and password with the credentials of a Windows AD domain admin. This gives you a file id, the username of whoever has it open, the lock status, and the filename. You'll frequently want to filter the results by piping them through grep.
Once you've found a file you want to close, copy its file id number and use this command:
net rpc file close fileid -U ADadmin%password
I needed to accomplish something like this so that I could easily unmount devices I happened to be sharing. I wrote this quick bash script:
#!/bin/bash
# Kill the smbd process(es) holding open any locked file whose path matches $1.
PIDS_TO_CLOSE=$(smbstatus -L | tail -n +4 | grep "$1" | cut -d' ' -f1 - | sort -u | sed '/^$/d')
for PID in $PIDS_TO_CLOSE; do
    kill $PID
done
It takes a single argument, the path to close:
smbclose /media/drive
Any path that matches the argument (via grep) is closed, so you should be pretty specific with it. (Only files open through Samba are affected.) Obviously, you need root to close files opened by other users, but it works fine for files you have open yourself. Note that, as with any other force-closing of a file, data corruption can occur; as long as the files are inactive, it should be fine, though.
It's pretty ugly, but for my use case (closing whole mount points) it works well enough.