In a WSL environment, what is the difference between the C drive and other drives? - windows-subsystem-for-linux

I'm using apache2 + PHP 7.2 on WSL. When I tried to change upload_tmp_dir in /etc/php/7.2/apache2/php.ini to a path under /mnt/d, I got an error like this:
PHP Warning: move_uploaded_file(): Operation not permitted in
strace info:
247 unlink("./uploaded_files/avatar \346\227\240\345\220\215.jpg") = 0
247 rename("/mnt/d/contents/test/tmp/phpe68vM3", "uploaded_files/avatar \345\246\271\345\255\220\343\200\202\343\200\202.jpg") = 0
247 umask(077) = 022
247 umask(022) = 077
247 chmod("uploaded_files/avatar \345\246\271\345\255\220\343\200\202\343\200\202.jpg", 0644) = -1 EPERM (Operation not permitted)
However, if I set a path under /mnt/c, it works fine.
So there seems to be some difference between /mnt/c and /mnt/d.
Could someone please tell me how to deal with this?
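A minimal Python sketch to check whether the same chmod() that the strace above shows failing behaves differently on the two mounts (the paths below are examples only and need to point at existing files):
import os

# Example paths only - adjust to files that actually exist on your mounts.
for path in ("/mnt/c/temp/test.jpg", "/mnt/d/contents/test/test.jpg"):
    try:
        os.chmod(path, 0o644)            # the same call PHP makes after the rename()
        print(path, "-> chmod succeeded")
    except PermissionError as err:
        print(path, "-> chmod failed:", err)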

Related

Google Colab unable to work with HDF5 files

I have 4 HDF5 files in my Drive. While using Colab, db=h5py.File(path_to_file, "r") works sometimes and fails the rest of the time. While writing the HDF5 files, I made sure to close them after writing. Say File1 works on notebook_#1; when I try to use it on notebook_#2 it works sometimes and fails other times. When I run it again on notebook_#1 it may or may not work.
Size is probably not the issue, because some of my files are 32GB and others 4GB, and the problem mostly occurs with the 4GB files.
The hdf5 files were generated using colab itself. The error that I get is:
OSError: Unable to open file (file read failed: time = Tue May 19 12:58:36 2020
, filename = '/content/drive/My Drive/Hrushi/dl4cv/hdf5_files/train.hdf5', file descriptor = 61, errno = 5, error message = 'Input/output error', buf = 0x7ffc437c4c20, total read size = 8, bytes this sub-read = 8, bytes actually read = 18446744073709551615, offset = 0
or
/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
171 if swmr and swmr_support:
172 flags |= h5f.ACC_SWMR_READ
--> 173 fid = h5f.open(name, flags, fapl=fapl)
174 elif mode == 'r+':
175 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5f.pyx in h5py.h5f.open()
OSError: Unable to open file (bad object header version number)
Would be grateful for any help, thanks in advance.
Reading directly from Google Drive can cause problems.
Try copying the file to a local directory, e.g. /content/, first.
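A minimal sketch of that workaround, assuming the Drive path from the error message above (adjust the paths to your own files):
import shutil
import h5py

drive_path = "/content/drive/My Drive/Hrushi/dl4cv/hdf5_files/train.hdf5"
local_path = "/content/train.hdf5"

shutil.copy(drive_path, local_path)      # copy from Drive to the Colab VM's local disk
with h5py.File(local_path, "r") as db:   # open the local copy instead of the Drive file
    print(list(db.keys()))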

How can ejecting an audio CD by using cdaudio's cd_eject () method produce errno #5?

I'm trying to play back an audio CD from my app using the cdaudio library and a USB DVD drive attached to a Raspberry Pi 3B. Trying to eject the CD after playback always leaves me with errno #5. This is my code:
void sound::Eject ()
{
    struct disc_status cd_stat;

    if (sound::current_sound_source == CD) {
        sound::Stop ();
        cd_poll (sound::cd_drive_handler, &cd_stat);
        if (sound::is_cd_stopped && cd_stat.status_present == 1) {
            if ((cd_eject (sound::cd_drive_handler)) < 0)
                cout << "Ejecting CD failed! Error: " << strerror (errno) << endl;
        }
    }
}
This is the output I get:
ioctl returned -1
Ejecting CD failed! Error: Input/output error
When trying to eject the CD, I hear a noise in the drive, as if it was about to access the CD, for about half a second. This is the drive I'm using:
pi@autoradio:~ $ ls -al /dev/sr*
brw-rw----+ 1 root cdrom 11, 0 Mai 1 21:38 /dev/sr0
Ejecting the CD from the command line (eject /dev/sr0) does work, though.
Does anybody know what may cause this error? Thank you.
UPDATE #1: I gave cdcd (the command-line tool for audio CDs) a try, and I could reproduce the error there, too (even under sudo):
cdcd> eject
ioctl returned -1
UPDATE #2: I found out that cdaudio calls ioctl with the CDAUDIO_EJECT command (see the source code), but I can't find such a command anywhere in the linux/cdrom.h file. According to one of the developers of the cdaudio library, this is just an alias for CDROMEJECT and not a bug.
UPDATE #3: strace gives me this output. I hope this is sufficient:
ioctl(3, CDROM_DISC_STATUS, 0) = 100
ioctl(3, CDROMSUBCHNL, 0x7e93e308) = 0
ioctl(3, CDROMEJECT, 0x1) = -1 EIO (Input/output error)
write(1, "ioctl returned -1\n", 18) = 18
In contrast, when tracing the eject utility, I get something slightly different:
geteuid32() = 1000
open("/dev/sr0", O_RDWR|O_NONBLOCK) = 3
ioctl(3, CDROMEJECT, 0x1) = 0
close(3) = 0
exit_group(0) = ?
+++ exited with 0 +++
A comparison of the open () calls reveals that the cdaudio library apparently opens the drive in read-only mode (which is theoretically correct, but seems to choke the eject command):
open("/dev/sr0", O_RDONLY|O_NONBLOCK) = 3
SEE ALSO: Question #26240195
OK, after a couple of weeks of studying the eject utility, I found out that at least some CD drives won't accept a CDROMEJECT command via ioctl (), but require a set of SCSI commands instead. In fact, eject contains a method that is used as a fallback in exactly such situations: eject_scsi (). I ported this method into cdaudio, and tests were successful, so I asked the maintainers of cdaudio for a corresponding patch.
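For reference, here is a hedged Python sketch (not the cdaudio patch itself) that mirrors the successful path of the eject utility seen in the strace above: open the drive read-write and non-blocking, then issue CDROMEJECT. Drives that reject CDROMEJECT would still need the SCSI fallback described above.
import os
import fcntl

CDROMEJECT = 0x5309                      # ioctl request number from <linux/cdrom.h>

fd = os.open("/dev/sr0", os.O_RDWR | os.O_NONBLOCK)   # same open mode as the eject utility
try:
    fcntl.ioctl(fd, CDROMEJECT)          # may fail with EIO on drives that need SCSI commands
finally:
    os.close(fd)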

Executing Perl via cgid on Solaris fails in 64-bit, but works in 32-bit

I have a custom build of Apache's httpd version 2.2 together with Perl version 5.22 on Solaris 10. The httpd runs in a chroot environment, and the Perl script is executed via httpd's mod_cgid. As long as everything was built in 32-bit, things worked. Now I have compiled everything in 64-bit (because another httpd module is only provided as a 64-bit binary), and I cannot get the Perl script to execute via cgid any more.
The http error log contains the line
Premature end of script headers.
So I tried to execute my test script without cgid, just using perl inside the chroot, and apart from some warnings it worked fine. Here is my script, if it's of any interest:
#!/local/perl5/bin/perl
print "Content-type: text/plain\n\n";
opendir(DIRHANDLE, "/");
@filenames = readdir(DIRHANDLE);
foreach $file (@filenames) { print "$file\n"; }
closedir(DIRHANDLE);
(I know it's not a great one :))
The warnings were about the locale not being set, so I fixed that by adding /usr/lib/locale to the chroot. This removed the warnings but did not fix the original problem, so I assume it was not the root cause. Moreover, the 32-bit build produced the same warnings, yet there the script executed fine via cgid.
The next thing I did was to trace the system calls via truss -f -o mylogfile.txt. The full output can be found on pastebin (32-bit truss). Here is an excerpt for the 32-bit build (line 4296 on pastebin) - note that the paths are not exactly the same as on pastebin, but the observed result is the same:
28420: sigaction(SIGCLD, 0xFFBFF6A8, 0xFFBFF748) = 0
28420: chdir("/path/to/my/chroot/cgi-bin/") = 0
28420: execve("/path/to/my/chroot/cgi-bin/test.pl", 0x00183DB8, 0x00183570) argc = 3
28420: *** SUID: ruid/euid/suid = 50001 / 50001 / 50001 ***
28420: *** SGID: rgid/egid/sgid = 50001 / 50001 / 50001 ***
28420: sysinfo(SI_MACHINE, "sun4u", 257) = 6
And here is the truss output for the 64-bit build. The following is an excerpt (line 4489); note that I left out some lines, denoted by [...]:
28906/21: open("/dev/urandom", O_RDONLY) = 12
[...]
28911: sigaction(SIGCLD, 0xFFFFFFFF7FFFF150, 0xFFFFFFFF7FFFF250) = 0
28911: chdir("/path/to/my/chroot/cgi-bin/") = 0
28906/21: pollsys(0xFFFFFFFF747F7080, 1, 0xFFFFFFFF747F6FA0, 0x00000000) = 1
28906/21: read(12, 0x10034BB38, 8000) = 0
28906/21: close(12) = 0
[...]
28906/21: read(10, "\0\0 pEF", 4) = 4
28906/21: kill(28911, SIGTERM) Err#3 ESRCH
28904: close(4) = 0
As Andrew Haenle noticed, I did not execute the same script in 32-bit and 64-bit - at least not in the truss output shown above. So here is the truss output for the failing 64-bit build, where I execute the same script as in 32-bit: https://pastebin.com/Nz1jBjne
Here is some more truss output from the 64-bit build, with the additional flags -a -e -d: https://pastebin.com/4NMGD2aR
The way I interpret this is that in the 64-bit build, cgid gets killed after changing to the cgi-bin directory, whereas in 32-bit it goes on to execute the script.
Permissions are the same, so I do not see what the problem is here. At least this explains the message in the error log - since the script is never executed, no headers are ever printed.
Anyway, I am a bit lost as to where to go from here. Any hints on how to debug this further would be highly appreciated.
Your 32- and 64-bit tests are not the same. Per the posted truss output, the 32-bit process appears to run a Perl CGI script called cgi-test.cgi:
28415/25: stat64("/local/content/apache/myinstance.acme.com/cgi-bin/cgi-test.cgi", 0xFAFFBA18) = 0
28415/25: lstat64("/local", 0xFAFFBA18) = 0
28415/25: lstat64("/local/content", 0xFAFFBA18) = 0
28415/25: lstat64("/local/content/apache", 0xFAFFBA18) = 0
28415/25: lstat64("/local/content/apache/myinstance.acme.com", 0xFAFFBA18) = 0
28415/25: lstat64("/local/content/apache/myinstance.acme.com/cgi-bin", 0xFAFFBA18) = 0
28415/25: lstat64("/local/content/apache/myinstance.acme.com/cgi-bin/cgi-test.cgi", 0xFAFFBA18) = 0
28415/25: open("/dev/urandom", O_RDONLY) = 12
28415/25: read(12, "C3 E DB1 A03 5 kCBA8DF\r".., 64) = 64
28415/25: close(12) = 0
28415/25: open("/dev/urandom", O_RDONLY) = 12
28415/25: read(12, "E6 L _F3 uBC fA7E18AFC \".., 64) = 64
28415/25: close(12) = 0
28415/25: so_socket(PF_UNIX, SOCK_STREAM, 0, "", SOV_DEFAULT) = 12
28415/25: connect(12, 0xFAFF7AA8, 110, SOV_DEFAULT)
...
28420: sigaction(SIGCLD, 0xFFBFF6A8, 0xFFBFF748) = 0
28420: chdir("/local/content/apache/myinstance.acme.com/cgi-bin/") = 0
28420: execve("/local/content/apache/myinstance.acme.com/cgi-bin/cgi-test.cgi", 0x00183DB8, 0x00183570) argc = 3
28420: *** SUID: ruid/euid/suid = 50001 / 50001 / 50001 ***
28420: *** SGID: rgid/egid/sgid = 50001 / 50001 / 50001 ***
Note that the CGI script is run by PID 28420, where PID 28415 appears to be the "controlling" process.
But for the 64-bit process, this is the corresponding output, with the CGI script being test.pl:
28906/21: stat("/local/content/apache/myinstance.acme.com/cgi-bin/test.pl", 0xFFFFFFFF747FB3E0) = 0
28906/21: lstat("/local", 0xFFFFFFFF747FB3E0) = 0
28906/21: lstat("/local/content", 0xFFFFFFFF747FB3E0) = 0
28906/21: lstat("/local/content/apache", 0xFFFFFFFF747FB3E0) = 0
28906/21: lstat("/local/content/apache/myinstance.acme.com", 0xFFFFFFFF747FB3E0) = 0
28906/21: lstat("/local/content/apache/myinstance.acme.com/cgi-bin", 0xFFFFFFFF747FB3E0) = 0
28906/21: lstat("/local/content/apache/myinstance.acme.com/cgi-bin/test.pl", 0xFFFFFFFF747FB3E0) = 0
28906/21: open("/dev/urandom", O_RDONLY) = 12
28906/21: read(12, "9C `9F9899 uAED9 `1CBE11".., 64) = 64
28906/21: close(12) = 0
28906/21: open("/dev/urandom", O_RDONLY) = 12
28906/21: read(12, "C2F8FC11C31D = ! V ; O =".., 64) = 64
28906/21: close(12) = 0
28906/21: so_socket(PF_UNIX, SOCK_STREAM, 0, "", SOV_DEFAULT) = 12
28906/21: connect(12, 0xFFFFFFFF747F9560, 110, SOV_DEFAULT) = 0
...
28911: sigaction(SIGCLD, 0xFFFFFFFF7FFFF150, 0xFFFFFFFF7FFFF250) = 0
28911: chdir("/local/content/apache/myinstance.acme.com/cgi-bin/") = 0
Note the lack of an execve() call. And then PID 28911 disappears until this:
28906/21: kill(28911, SIGTERM) Err#3 ESRCH
28906/21: kill(28911, SIG#0) Err#3 ESRCH
Not only is there no execve() call that actually executes the Perl script, there is no PID 28911 any more.
The problem appears to be the test.pl script. What are the permissions on the script? What user/group owns it? Does it have any ACLs attached?
First of all, thank you to all who made comments and suggestions.
It finally turns out that it was a permissions problem - not in the script, but in the libraries. The way my chroot environment is built, custom permissions get applied inside the chroot, and that mechanism had not yet been adapted to 64-bit: for 32-bit backwards compatibility, the 64-bit libraries live in different subdirectories of the standard locations. Also, my httpd process runs as a non-root user. After correcting the permissions, Perl worked like a charm in 64-bit.
But perhaps more interesting for the general public is how I found this out: I added the -X flag to the Apache startup command. That enables debug mode for Apache and finally produced the error message I needed:
env.pl: Cannot execute /lib/sparcv9/ld.so.1
cgi-test.cgi: Cannot execute /lib/sparcv9/ld.so.1
After executing - inside the chroot of course -
chmod o+x /lib/sparcv9/ld.so.1
chmod o+r /lib/sparcv9/*
my perl test scripts worked again.
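A hedged sketch of how one might spot such permission gaps inside the chroot before they bite (the chroot path is an example; it only checks the "other" read/execute bits that the chmod commands above adjust):
import os
import stat

CHROOT = "/path/to/my/chroot"            # example path - adjust to your setup
libdir = os.path.join(CHROOT, "lib/sparcv9")

for name in sorted(os.listdir(libdir)):
    path = os.path.join(libdir, name)
    mode = os.stat(path).st_mode
    missing = []
    if not mode & stat.S_IROTH:                          # needs o+r
        missing.append("o+r")
    if name == "ld.so.1" and not mode & stat.S_IXOTH:    # the runtime linker also needs o+x
        missing.append("o+x")
    if missing:
        print(path, "missing", ",".join(missing), oct(mode & 0o777))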

Why is KEYS advised not to be used in Redis?

In Redis, it is advised not to use the KEYS command. Why is that? Is it because its time complexity is O(N)? Or is there some other reason?
Yes.
The time complexity is very bad. Note that the N in O(N) refers to the total number of keys in the database, not the number of keys matched by the filter pattern, so this can be a really big number for a production database.
Even worse, since Redis processes only one command at a time (command execution is single-threaded), everything else has to wait for that KEYS call to complete.
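That blocking behaviour is also why SCAN is usually recommended instead: it walks the keyspace in small batches, letting other commands run in between. A minimal sketch, assuming the redis-py client:
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

# r.keys("*") would be one long O(N) call that blocks the server.
# scan_iter() issues repeated cursor-based SCAN calls instead:
for key in r.scan_iter(match="*", count=1000):
    pass                                  # process each key here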
I did the following experiment to show how dangerous the KEYS command is.
While one KEYS command runs, other KEYS commands wait for their turn. One run of the KEYS command has two phases: first Redis collects the matching keys, then it sends them to the client.
$ time src/redis-cli keys "*" | wc -l
1450832
real 0m17.943s
user 0m8.341s
$ src/redis-cli
127.0.0.1:6379> slowlog get
1) 1) (integer) 0
2) (integer) 1621437661
3) (integer) 8321405
4) 1) "keys"
2) "*"
So, the command ran on Redis for 8s and its output was then piped to the wc command. Redis finished with the command in 8s, but wc needed the data for 17s to complete the counting, so the memory buffers had to stay around for at least 17s. Now imagine clients on the network, where this data has to be transferred to the clients as well. If we have 10 KEYS commands that run on Redis one by one, then when the first one finishes and the next one runs, the results of the first command still have to be kept in memory until its client has consumed them. That all takes memory, so I can imagine a situation where the 5th client is running its KEYS command while we still need to keep the data for the first client, because it has not yet been transferred over the network.
Let's test it out.
Scenario: take a Redis DB of about 200M (on a machine with roughly 1000M of physical memory) and check how much memory one execution of KEYS takes, and for how long, when the results are sent over the network. Then simulate 5 identical KEYS commands being run and see if that kills Redis.
$ src/redis-cli info memory
used_memory_human:214.17M
total_system_memory_human:926.08M
When run from the same node:
$ time src/redis-cli keys "*" | wc -l
1450832
real 0m17.702s
user 0m8.278s
$ free -m
total used free shared buff/cache available
Mem: 926 301 236 24 388 542
Mem: 926 336 200 24 388 507
Mem: 926 368 168 24 388 475
Mem: 926 445 91 24 388 398
Mem: 926 480 52 24 393 363
Mem: 926 491 35 24 399 352
-> looks like it consumed 190M for the KEYS command
-> so Redis is busy with the command for 8s, but the memory is consumed by this command for 17s
-> running just one KEYS command only blocks Redis for 8s, and does not cause an OOM
Let's run 2 KEYS commands at the (almost) same time (that will run one after another anyway)
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ free -m
total used free shared buff/cache available
Mem: 926 300 430 24 194 546
Mem: 926 370 361 24 194 477
Mem: 926 454 276 24 194 393
Mem: 926 589 141 24 194 258
Mem: 926 693 37 24 194 154
-> now we used 392M memory for 26s, while Redis is hung for 17s
-> but we still have a running Redis
Let's run 3 KEYS commands at the (almost) same time (that will run one after another anyway)
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ free -m
total used free shared buff/cache available
Mem: 926 299 474 23 152 549
Mem: 926 385 388 23 152 463
Mem: 926 512 261 23 152 336
Mem: 926 573 200 23 152 275
Mem: 926 711 61 23 152 136
Mem: 926 842 21 21 62 17
-> now we used 532M memory for 36s, while Redis is hung for 26s
-> but we still have a running Redis
Let's run 4 KEYS commands at the (almost) same time (that will run one after another anyway)
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
$ time src/redis-cli keys "*" | wc -l &
-> that kills Redis
Nothing in the Redis logs:
2251:C 19 May 16:03:05.355 * DB saved on disk
2251:C 19 May 16:03:05.379 * RDB: 2 MB of memory used by copy-on-write
1853:M 19 May 16:03:05.432 * Background saving terminated with success
In /var/log/messages
May 19 16:08:01 consumer2 kernel: [454881.744017] redis-cli invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
May 19 16:08:01 consumer2 kernel: [454881.744180] [<8023bdb8>] (oom_kill_process) from [<8023c6e8>] (out_of_memory+0x134/0x36c)
Conclusion:
we can kill a healthy Redis instance consuming 200M of RAM, on a host with 70% of its RAM free, just by running 4 KEYS commands issued one after another - simply because the results have to stay buffered even after Redis has finished executing the commands.
one cannot protect Redis against this behaviour with maxmemory, because the memory usage is not the result of a SET command

Graphlab Create setup error: graphlab.get_dependencies() results in BadZipFile error

After installing GraphLab Create on Windows 10, it asks us to install two dependencies using graphlab.get_dependencies().
However, I am getting the following error:
In [9]: gl.get_dependencies()
By running this function, you agree to the following licenses.
* libstdc++: https://gcc.gnu.org/onlinedocs/libstdc++/manual/license.html
* xz: http://git.tukaani.org/?p=xz.git;a=blob;f=COPYING
Downloading xz.
Extracting xz.
---------------------------------------------------------------------------
BadZipfile Traceback (most recent call last)
in ()
----> 1 gl.get_dependencies()
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\site-packages\graphlab\dependencies.pyc in get_dependencies()
34 xzarchive_dir = tempfile.mkdtemp()
35 print('Extracting xz.')
---> 36 xzarchive = zipfile.ZipFile(xzarchive_file)
37 xzarchive.extractall(xzarchive_dir)
38 xz = os.path.join(xzarchive_dir, 'bin_x86-64', 'xz.exe')
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\zipfile.pyc in __init__(self, file, mode, compression, allowZip64)
768 try:
769 if key == 'r':
--> 770 self._RealGetContents()
771 elif key == 'w':
772 # set the modified flag so central directory gets written
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\zipfile.pyc in _RealGetContents(self)
809 raise BadZipfile("File is not a zip file")
810 if not endrec:
--> 811 raise BadZipfile, "File is not a zip file"
812 if self.debug > 1:
813 print endrec
BadZipfile: File is not a zip file
Does anyone know how to resolve this?
If you get this error, a firewall might be blocking you from downloading a dependency. Here is some information and a workaround:
Please see the SFrame source code for get_dependencies to see how GraphLab uses this package: https://github.com/turicode/SFrame/blob/master/oss_src/unity/python/sframe/dependencies.py
The xz utility is only used to extract runtime dependencies from the other file downloaded there (from repo.msys2.org): http://repo.msys2.org/mingw/x86_64/mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz. Two DLLs from that file need to be extracted into the "cython" directory inside the GraphLab Create install path (typically something like lib/site-packages/python2.7/graphlab within a virtualenv or conda env). Once extracted, the dependency issue should be resolved.
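If get_dependencies() itself keeps failing, here is a hedged sketch of that manual route (it assumes Python 3, whose tarfile module handles .xz archives, and the destination path is an example that must be adjusted to your own GraphLab install):
import os
import tarfile
import urllib.request

URL = ("http://repo.msys2.org/mingw/x86_64/"
       "mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz")
# Example destination - replace with the "cython" directory of your GraphLab install.
DEST = r"C:\Users\nikulk\Anaconda2\envs\gl-env\lib\site-packages\graphlab\cython"

archive, _ = urllib.request.urlretrieve(URL)          # download the .tar.xz package
with tarfile.open(archive, "r:xz") as tar:
    for member in tar.getmembers():
        if member.name.endswith(".dll"):              # extract the runtime DLLs
            member.name = os.path.basename(member.name)   # flatten the archive paths
            tar.extract(member, DEST)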
In the graphlab folder, make the folder writable. Initially it is read-only. Go to the folder's properties and untick the read-only option. Hope this solves your problem.