I am looking for a way to mount an NTFS hard disk on FreeBSD 6.2 in read/write mode.
Searching Google, I found that NTFS-3G can help.
Using NTFS-3G, there is no problem when I mount/unmount NTFS manually:
mount: ntfs-3g /dev/ad1s1 /home/admin/data -o uid=1002,
or
umount: umount /home/admin/data
But I have a problem when I try to mount the NTFS hard disk automatically at boot time.
I have tried:
adding this line to fstab: /dev/ad1s1 /home/admin/data ntfs-3g uid=1002 0 0
making a script in the /usr/local/etc/rc.d/ directory that automatically mounts the NTFS partition at startup.
But it still fails.
The script works well when it is executed manually.
Does anyone know an alternative method/solution to get read/write access to NTFS on FreeBSD 6.2?
Thanks.
What level was your script running at? Was it S99, or lower?
It sounds like either there is a dependency that isn't loaded at the time you mount, or the user who is trying to mount from the script isn't allowed to.
In your script, I suggest adding sudo to make sure that the mount is performed by root:
/sbin/sudo /sbin/mount ntfs-3g /dev/ad1s1 /home/admin/data -o uid=1002, etc
Swap the sbin for wherever the binaries are.
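For what it's worth, a minimal rc.d sketch along those lines; the script name, the PROVIDE/REQUIRE keywords and the ntfs-3g/fuse paths are assumptions on my part, so adjust them for your install:
#!/bin/sh
# PROVIDE: ntfsdata
# REQUIRE: LOGIN
# Hypothetical rcorder keywords: they ask for this script to run late,
# after the things a fuse-based mount depends on are already up.

# Load the fuse kernel module if it is not in yet (from fusefs-kmod).
/sbin/kldload fuse > /dev/null 2>&1

# rc.d scripts run as root at boot, which also covers the sudo concern.
/usr/local/bin/ntfs-3g /dev/ad1s1 /home/admin/data -o uid=1002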
After the approaches I tried above failed, I finally added ntfs-3g support by changing the mount program itself, in mount.c, like this:
static int
use_mountprog(const char *vfstype)
{
	/* XXX: We need to get away from implementing external mount
	 * programs for every filesystem, and move towards having
	 * each filesystem properly implement the nmount() system call.
	 */
	unsigned int i;
	const char *fs[] = {
		"cd9660", "mfs", "msdosfs", "nfs", "nfs4", "ntfs",
		"nwfs", "nullfs", "portalfs", "smbfs", "udf", "unionfs",
		"ntfs-3g",
		NULL
	};

	for (i = 0; fs[i] != NULL; ++i) {
		if (strcmp(vfstype, fs[i]) == 0)
			return (1);
	}

	return (0);
}
Recompile the mount program, and it works!
Thanks...
When working on compiled documents (LaTeX, RMarkdown, etc.), I usually set up a make rule that uses inotifywait to watch the input files and automatically rebuild the output file whenever the inputs change.
For example:
dependencies = main.tex

main.pdf: $(dependencies)
	latexmk -lualatex --shell-escape $<

watch:
	while true; do inotifywait --event modify $(dependencies); $(MAKE); done
I'm now trying to migrate from make to snakemake. How can I set up something similar with snakemake?
Using Snakemake you get the power of Python. For example, you can use the inotify Python module to wait for updates and run the snakemake.snakemake function each time you detect changes. But it would be much easier to reuse the bash one-liner that you already have: while true; do inotifywait --event modify $(dependencies); snakemake; done.
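For illustration, here is a minimal sketch of the Python route. It assumes the inotify package from PyPI and an older Snakemake whose Python API still exposes the snakemake.snakemake entry point; main.tex is the input file from your question:
import inotify.adapters            # PyPI "inotify" package
from snakemake import snakemake    # old-style Python API

watcher = inotify.adapters.Inotify()
watcher.add_watch('main.tex')      # the input file from the question

snakemake('Snakefile')             # build once up front
for event in watcher.event_gen(yield_nones=False):
    (_, type_names, path, filename) = event
    if 'IN_MODIFY' in type_names:  # rebuild on every modification
        snakemake('Snakefile')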
I installed ipfs version 0.8.0 on WSL Ubuntu 18.04, started ipfs using sudo ipfs daemon, and added a directory using the command sudo ipfs add -r /home/user/ipfstest, which results in:
added QmfYH2KVxANPA3um1W5MYWA6zR4Awv8VscaWyhhQBVj65L ipfstest/abc.sh
added QmTXny9ZjuFPm4C4KbQSEYxvUp2MYbSCLppPQirW7ap4Go ipfstest
Likewise, I added one more directory containing 2 files. Now I need the total number of files and directories in my ipfs node, using go-ipfs-api. The following is my code:
package main

import (
	"fmt"
	"context"
	"os"
	"net/http"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"github.com/ipfs/go-ipfs-api"
)

var sh *shell.Shell

func main() {
	sh := shell.NewShell("localhost:5001")
	dir, err := sh.FilesLs(context.TODO(), "")
	if err != nil {
		fmt.Fprintf(os.Stderr, "error: %s", err)
		os.Exit(1)
	}
	fmt.Printf("Dir are: %d", dir)
	pins, err := sh.Pins()
	if err != nil {
		fmt.Fprintf(os.Stderr, "error: %s", err)
		os.Exit(1)
	}
	fmt.Printf("Pins are: %d", len(pins))
	dqfs_pincount.Add(float64(len(pins)))
	prometheus.MustRegister(dqfs_pincount)
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8090", nil)
}
If I run this code, I get the output:
Dir are: [824634392256] Pins are: 18
The pin count increases as I add files. But what is this output [824634392256]? And why only one element?
I tried giving a path to the function: dir, err := sh.FilesLs(context.TODO(), "/.ipfs"), since I guessed the files and directories must be stored in ~/.ipfs. But this gives an error:
error: files/ls: file does not exist
How can I get all directories of ipfs? Where am I mistaken? What path should I provide as a parameter? Please help and guide.
There's a bit to unpack here.
Why are you using sudo?
IPFS is meant to be run as a regular user. Generally you don't want to run it as root; instead, run the same commands, just without sudo:
ipfs daemon
ipfs add -r /home/user/ipfstest
...
Code doesn't compile
Let's begin with the code and make sure it works as intended before moving forward. First off, your import:
"github.com/ipfs/go-ipfs-api"
Should read:
shell "github.com/ipfs/go-ipfs-api"
Otherwise the code won't compile, because of your use of shell later in the code.
Why does dir produce the output it does?
Next, let's look at your usage of dir. You're storing a []*MfsLsEntry (MfsLsEntry), i.e. a slice of pointers. You're printing it with the %d verb, which formats a base-10 integer (docs), so the 824634392256 is just the memory address of the MfsLsEntry at the first (and only) index of the slice.
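For readable output instead, range over the slice and print each entry's fields; a quick sketch (assuming MfsLsEntry's Name, Hash and Size fields) that could replace the fmt.Printf("Dir are: %d", dir) line in your main:
entries, err := sh.FilesLs(context.TODO(), "/")
if err != nil {
	fmt.Fprintf(os.Stderr, "error: %s", err)
	os.Exit(1)
}
for _, e := range entries {
	// each e is a *MfsLsEntry
	fmt.Printf("%s %s %d\n", e.Name, e.Hash, e.Size)
}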
Why does sh.FilesLs(context.TODO(),"/.ipfs") fail?
Well, FilesLs isn't querying the regular filesystem your OS runs on, but MFS. MFS is stored locally, but using the add API doesn't automatically add anything to your MFS. You can, however, use FilesCp to add a CID to your MFS after you add it.
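For example, something along these lines should make the directory you added show up in MFS (a sketch: the destination path /ipfstest is arbitrary, and the CID is the one from your add output):
// Reference the already-added directory from MFS under /ipfstest.
err := sh.FilesCp(context.TODO(), "/ipfs/QmTXny9ZjuFPm4C4KbQSEYxvUp2MYbSCLppPQirW7ap4Go", "/ipfstest")
if err != nil {
	fmt.Fprintf(os.Stderr, "error: %s", err)
	os.Exit(1)
}
After that, sh.FilesLs(context.TODO(), "/") will list it.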
How do I list my directories on IPFS?
This is a bit of a tricky question. The only data really retained on IPFS is either data that is pinned or data referenced in the MFS. So, as we learned above, the FilesLs command lists the files/directories in your MFS. To list your recursive pins (directories), it's quite simple from the command line:
ipfs pin ls -t recursive
For the API though, you'll first want to call something like Shell.Pins(), filter out for the pins you want (maybe a quick scan through, pull out anything recursive), then query the CIDs using Shell.ObjectStat or whatever you prefer.
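A self-contained sketch of that flow (assuming PinInfo's Type field uses the string "recursive", as the CLI does):
package main

import (
	"fmt"
	"os"

	shell "github.com/ipfs/go-ipfs-api"
)

func main() {
	sh := shell.NewShell("localhost:5001")

	pins, err := sh.Pins()
	if err != nil {
		fmt.Fprintf(os.Stderr, "error: %s", err)
		os.Exit(1)
	}

	// Keep only the recursive pins (your added directories) and stat them.
	for cid, info := range pins {
		if info.Type == "recursive" {
			stat, err := sh.ObjectStat(cid)
			if err != nil {
				continue // skip anything we can't stat
			}
			fmt.Printf("%s: %d links, %d bytes\n", cid, stat.NumLinks, stat.CumulativeSize)
		}
	}
}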
If working with the pins though, do remember that it won't feel quite like a regular mutable filesystem, because it isn't. It's much easier to navigate through CIDs added to MFS. So that's how I'd recommend you list your directories on IPFS.
I am attached to a running process using dbx on AIX. There is a bug in the program: the offset in the opcode below is 0x9b8, but should be 0xbe8:
(dbx) listi 0x100001b14
0x100001b14 (..........+0x34) e88109b8 ld r4,0x9b8(r1)
I am able to fix that using the command below:
(dbx) assign 0x100001b14 = 0xe8810be8
but that affects only the running process and its memory. How can I change the on-disk binary? I am not able to locate the pattern e88109b8 in the binary file; otherwise I would use e.g. the dd utility to patch it.
Best regards,
Pavel Filipensky
I want to cache hundreds of images while the splash screen is displayed.
I am following the guide from https://docs.expo.io/versions/latest/sdk/app-loading#__next,
but I don't want to type them one by one like this:
async _cacheResourcesAsync() {
  const images = [
    require('./assets/images/image1.png'),
    require('./assets/images/image2.png'),
    require('./assets/images/image3.png'),
    require('./assets/images/image4.png'),
    require('./assets/images/image5.png'),
    require('./assets/images/image6.png'),
    ...
  ];

  const cacheImages = images.map((image) => {
    return Asset.fromModule(image).downloadAsync();
  });

  return Promise.all(cacheImages);
}
I've noticed that I cannot do something like (./assets/images/*):
async _cacheResourcesAsync() {
  const images = [
    require('./assets/images/*')
    ...
  ];
}
Is there a way to reference the whole folder in _cacheResourcesAsync()?
What you're looking for is called dynamic imports (eg: in webpack), and it's not available in React Native. Here is a description of how you can do that in React Native, both through automation and through a Babel plugin. Given that individual-maintainer Babel plugins don't seem to survive very well between even minor upgrades of Babel, I'd highly suggest the automation approach, akin to what is laid out in the article linked to above.
For my part, though, I'd probably do it in bash and tie it into my build process (through package.json's "scripts"). It'd be something like this:
#!/bin/bash -exu
# The paths are relative to the script's parent directory.
DIRS_TO_BUILD=( ./src/res ./images ./assets );

cd `dirname $0`
HEREDIR=`pwd`

for DIR in "${DIRS_TO_BUILD[@]}"
do
    cd "$HEREDIR/$DIR"
    # -f so a missing index.js doesn't abort the script (we run with -e).
    rm -f index.js
    # Collect .js/.jsx/.json files; no -Q, since quoted names would break
    # both the regex and the basename calls below.
    FILES=`ls -1Bb | grep -E '\.js(x|on)?$' | sort -u`
    for FILE in $FILES
    do
        if [ -f "$FILE" ]
        then
            BASENAME=`basename "$FILE" .js`
            BASENAME=`basename "$BASENAME" .jsx`
            BASENAME=`basename "$BASENAME" .json`
            echo "export const $BASENAME = require(\"./$FILE\");" >> index.js
        fi
    done
done
Resist the urge to import the generated index like import * as MyAssets from "./assets", because that'll kill tree shaking when it arrives.
You can modify that script to instead/also generate a call to loadAsync in order to prefetch all the assets. Such a modification is left as an exercise for the reader.
Hack instead for cache
A simple and effective way would be to bundle your assets in the app.json file. That way, when you make a build, those images are included in the build itself, and the app doesn't have to fetch them from the Amazon CDN. This will increase your build size for sure, but it makes the assets available offline and reloading faster.
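For example (a sketch assuming the standard Expo assetBundlePatterns key, and that your images live under ./assets/images as in your question):
{
  "expo": {
    "assetBundlePatterns": [
      "assets/images/*"
    ]
  }
}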
I need to create a folder if it does not exist, so I use:
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

bool mkdir_if_not_exist(const char *dir)
{
    bool ret = false;
    if (dir) {
        // first check if folder exists
        struct stat folder_info;
        if (stat(dir, &folder_info) != 0) {
            if (errno == ENOENT) { // create folder
                if (mkdir(dir, S_IRWXU | S_IXGRP | S_IRGRP | S_IROTH | S_IXOTH) != 0) // 755
                    perror("mkdir");
                else
                    ret = true;
            } else
                perror("stat");
        } else
            ret = true; // dir exists
    }
    return ret;
}
The folder is created only during the first run of the program; after that it is just a check.
There is a suggestion to skip the stat call and instead call mkdir directly, checking errno against EEXIST.
Does it give real benefits?
More importantly, with the stat + mkdir approach there is a race condition: in between the stat and the mkdir, another program could do the mkdir, so your mkdir could still fail with EEXIST.
There's a slight benefit. Look up 'LBYL vs EAFP' or 'Look Before You Leap' vs 'Easier to Ask Forgiveness than Permission'.
The slight benefit is that the stat() system call has to parse the directory name and get to the inode - or the missing inode in this case - and then mkdir() has to do the same. Granted, the data needed by mkdir() is already in the kernel buffer pool, but it still involves two traversals of the path specified instead of one. So, in this case, it is slightly more efficient to use EAFP than to use LBYL as you do.
However, whether that is really a measurable effect in the average program is highly debatable. If you are doing nothing but create directories all over the place, then you might detect a benefit. But it is definitely a small effect, essentially unmeasurable, if you create a single directory at the start of a program.
You might need to deal with the case where strcmp(dir, "/some/where/or/another") == 0 but, although "/some/where" exists, neither "/some/where/or" nor (of necessity) "/some/where/or/another" exists. Your current code does not handle missing directories in the middle of the path; it just reports the ENOENT that mkdir() would report. Your look-before-you-leap code does not check that dir actually is a directory, either; it just assumes that if it exists, it is a directory. Handling these variations properly is trickier (see the sketch below for the missing-middle case).
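For the missing-intermediate-directories case, a rough sketch of the classic fix (a hypothetical mkpath helper, not your code): walk the path and mkdir() each component, tolerating EEXIST:
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

/* Create every component of path, like mkdir -p. Sketch only: needs a
 * writable string and does no trailing-slash or is-a-directory checking. */
static int mkpath(char *path, mode_t mode)
{
    for (char *p = strchr(path + 1, '/'); p != NULL; p = strchr(p + 1, '/')) {
        *p = '\0';                    /* temporarily end the string here */
        if (mkdir(path, mode) != 0 && errno != EEXIST) {
            *p = '/';
            return -1;
        }
        *p = '/';                     /* restore and move on */
    }
    return (mkdir(path, mode) != 0 && errno != EEXIST) ? -1 : 0;
}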
Similar to Race condition with stat and mkdir in sequence, your solution is incorrect not only because of the race condition (as already pointed out by the other answers here), but also because you never check whether the existing file is a directory or not.
When re-implementing functionality that's already widely available in existing command-line tools in UNIX, it always helps to see how it was implemented in those tools in the first place.
For example, take a look at how the mkdir(1) -p option is implemented across the BSDs (bin/mkdir/mkdir.c#mkpath in OpenBSD and NetBSD), all of which, on mkdir(2) error, appear to immediately call stat(2) and run the S_ISDIR macro to ensure that the existing file is a directory, and not just any other type of file.
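Put together, a minimal sketch of that EAFP pattern (my illustration, not the BSD source): try mkdir(2) first, and only on EEXIST fall back to stat(2) plus S_ISDIR:
#include <errno.h>
#include <stdbool.h>
#include <sys/stat.h>

/* Create dir if missing; true iff it exists as a directory afterwards. */
bool mkdir_if_not_exist(const char *dir)
{
    struct stat sb;

    if (mkdir(dir, 0755) == 0)
        return true;          /* we created it */
    if (errno != EEXIST)
        return false;         /* real failure, e.g. ENOENT mid-path */
    /* Something already exists there; make sure it is a directory. */
    return stat(dir, &sb) == 0 && S_ISDIR(sb.st_mode);
}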