I'm experimenting with BitTornado-0.3.17 to distribute a file to several (*nix) machines. I ran into a couple of problems while doing so. Here is what I have done so far.
Downloaded BitTornado-0.3.17.tar.gz from http://download2.bittornado.com/download/BitTornado-0.3.17.tar.gz and untarred it.
Created a torrent file and started the tracker, following the instructions in the README file.
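For reference, the tracker and torrent-creation steps from the README look roughly like this (a sketch; the port, state-file path, and tracker hostname are placeholders I picked for illustration):

# start the tracker (port and state file are arbitrary choices)
./bttrack.py --port 6969 --dfile ./dstate

# create the torrent, pointing at that tracker's announce URL
./btmakemetafile.py http://my.tracker.host:6969/announce ../BitTornado-0.3.17.tar.gz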
Started a seeder
./btdownloadheadless.py ../BitTornado-0.3.17.tar.gz.torrent --saveas ../BitTornado-0.3.17.tar.gz
saving: BitTornado-0.3.17.tar.gz (0.2 MB)
percent done: 0.0
time left: Download Succeeded!
download to: /home/srikanth/BitTornado-0.3.17.tar.gz
download rate:
upload rate: 0.0 kB/s
share rating: 0.000 (0.0 MB up / 0.0 MB down)
seed status: 0 seen recently, plus 0.000 distributed copies
peer status: 0 seen now, 0.0% done at 0.0 kB/s
Now we have a seeder. I start a peer on another machine to download BitTornado-0.3.17.tar.gz.
./btdownloadheadless.py BitTornado-0.3.17.tar.gz.torrent
At this point I do not observe my peer downloading data from the seeder. However, if I kill my seeder and start it again, the peer immediately downloads from the seeder. Why is it happening this way? The first time the seeder reports to the tracker, the tracker should be aware of the seeder and share that information with the newly joined peer. The download only happens when I start the seeder after the peer joins the network.
Has anyone used BitTornado to distribute files programmatically (not using GUI tools at all)?
Thanks :-)
EDIT: Here is what happened a few days later. I dug into the tracker logs and figured out that the seeder was binding itself to a private IP address interface and reporting that address. That was preventing other clients from reaching the seeder, hence no download. So I passed the --ip option to it, which made it report to the tracker the machine's public IP address it was bound to. Even then, for some reason, I couldn't get a client to download from the seeder. However, I got it working by starting the client first and the seeder last. This worked for me consistently. I can't think of any reason why it shouldn't work the other way. So, I'm starting clients first and then starting the seeder.
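For anyone hitting the same problem, the seeder invocation with the --ip option looks like this (a sketch; the address is a placeholder for your machine's public IP):

./btdownloadheadless.py ../BitTornado-0.3.17.tar.gz.torrent --saveas ../BitTornado-0.3.17.tar.gz --ip 203.0.113.10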
All the symptoms indicate only one of your machines is able to connect to the other (in this case, the "seeder" machine). Restarting the "seeder" means it announces to the tracker and gets the other peers' info, then connects. If the downloader is unconnectable, it simply cannot do anything until the seeder sees its IP.
This may also be related to rerequest_interval in download_bt1.py or reannounce_interval in track.py. Setting them to smaller values may help you debug whether the tracker receives and distributes the right information.
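Independently of those intervals, you can check what the tracker is handing out by fetching its status page; bttrack serves an HTML summary of seeds and peers at its root URL (a sketch; host and port are placeholders):

# show per-torrent seed/peer counts as the tracker sees them
curl http://my.tracker.host:6969/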
When I diffed BitTornado against Twitter's Murder code, I found a small difference, specifically at line 75 of the Downloader.py file:
self.backlog = max(50, int(self.backlog * 0.075))
This change fixed the incomplete-download bug for me.
I'm trying to capture the desktop and stream it live to an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play it with MP4Client, it shows only a black screen, and I don't even get any error message while capturing. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is the wrong word here, given the actual difference!) between the UTC time indicated in the MPD in the availabilityStartTime attribute and the current time that MP4Client uses to compute which segment to fetch. This is only relevant for live content, because for on-demand, all segments are assumed to be available all the time.
MP4Client uses different strategies to determine the 'current' time. The system time on the client may be different from the system time on the server, for instance if they are using different NTP servers. System time is not reliable, so MP4Client tries to get the time from the server. It first tries to use a specific HTTP "Server-UTC" header that the server may set; see for example this code. If this header is not set, it looks at the HTTP "Date" header, even though it's not very precise. In your case, your HTTP server probably has a time configuration that does not match the system time. You can tell MP4Client to stop using the server information and to rely on its system time. Since you are using client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
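You can also see the mismatch directly by comparing the Date header your web server returns against the local clock (a sketch, reusing the MPD URL from the question):

# what the HTTP server claims the current time is
curl -sI http://localhost/vitor/dashcast/dashcast.mpd | grep -i '^date'
# what the client machine thinks the current time is
date -u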
Alternatively, you can try to play the MPD locally without going through the web server.
MP4Client file.mpd
If that is not working, open an issue on GPAC's GitHub providing as much information as possible, in particular the result of MP4Box -version.
I am using Twitter4J to retrieve user timelines, but it stopped working. The number of accepted requests is fine, but I get an authentication problem, probably related to clock sync?
INFO: Error while querying Twitter: 401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
{"request":"/1.1/statuses/user_timeline.json","error":"Not authorized."}
401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
{"request":"/1.1/statuses/user_timeline.json","error":"Not authorized."}
rateLimitStatus=RateLimitStatusJSONImpl{remaining=178, limit=180, resetTimeInSeconds=1432305852, secondsUntilReset=899}, version=3.0.5}
Not sure what to do then. I've already tried to sync my server with ntpdate ntp.ubuntu.com, with no luck.
I think you are using the sandbox (built-in VM) of Cloudera/Hortonworks, etc.
I was getting the same problem and tried to sync my clock with the 'time.windows.com' clock, but I failed to do so. So I moved to a 4-node cluster that already existed in my case; there my clock was in sync, and I could run my requests to Twitter successfully.
Conclusion: Move from the Cloudera/Hortonworks VM to your own installed OS and make sure the clock is in sync.
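If you stay on the VM, the usual fix is a one-time sync followed by keeping the clock disciplined (a sketch for an Ubuntu-style guest; the NTP server choice is arbitrary):

# one-time sync against a public NTP pool
sudo ntpdate pool.ntp.org
# install an NTP daemon to keep the clock in sync from now on
sudo apt-get install ntp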
Hope this helps!
Without getting into too much detailed code:
I have a 'kiosk' application that is running on about 500-800 different kiosks at about 50 locations. It is a very simple application that connects to the internet via a Verizon MiFi (2-3 MiFis per location). We believe that Verizon has made some changes to the network, and now I randomly get
The request failed with HTTP status 417: Expectation failed
I have viewed The request failed with HTTP status 417: Expectation Failed - Using Web Services
and FB Connect: (417) Expectation failed
But you see, I had already used
System.Net.ServicePointManager.Expect100Continue = false
in my code.
So one of the issues I have is that the application isn't easy to test: it will fail for 20-30 minutes or several days, then clear itself up.
Changing the config to include
<system.net>
  <settings>
    <servicePointManager expect100Continue="false" />
  </settings>
</system.net>
would be a large task, and I don't know if that would even fix it. Since the failure is random, I'm having trouble because I typically can't get it to fail at my desk more than once.
I happen to use VB and .Net for the application and services that run with the 'kiosk'.
The issue seems to be with the config on the MiFi, not the Verizon network itself. We recently switched APNs, and when a MiFi connects to the Verizon network it is supposed to update automatically. Sometimes the MiFi fails to update the APN setting, and that is when we get this error message. There are two ways I have found to fix this issue. The first and easier one is to log into the MiFi and manually update the setting. If you are dealing with a user who is not tech savvy, and walking them through logging into the MiFi will not work, you can call the Verizon Wireless enterprise help desk and have them remove the feature set from the MiFi, add the features back, and then pull the battery from the MiFi and power cycle it; this makes the MiFi request the configuration settings again.
I have a packstack all-in-one setup on my RHEL 7.1 trial, for the Juno release.
I am facing a problem while launching a VM (for example, cirros) with a disk size specified in the flavor. VMs with a 0 GB disk size launch fine, but VMs with larger flavor disk sizes do not.
I also observe that when I do this, the openstack-nova-compute service goes down, which I see when I check with nova-manage service list: nova-compute shows XXX, making me restart the service every time I try this scenario. The compute log doesn't throw any error; it just gets stuck at "Creating image".
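For reference, the check-and-restart cycle I keep repeating looks like this (the service name is the packstack default on RHEL 7):

# a dead compute service shows XXX instead of :-)
nova-manage service list
# bring it back up
systemctl restart openstack-nova-compute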
Is there some filesystem configuration I am missing? I am new to this, so please help.
PS: I run all commands as the "root" user.
The problem was with ESXi. ESXi needs to be version 5.5 to support RHEL 7.x; since mine was 5.1, it only supported RHEL 6.x.
After upgrading ESXi 5.1 to 5.5, it worked fine.
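If you want to confirm the host version before upgrading, the ESXi shell can report it (a sketch; run this on the hypervisor, not in the guest):

# prints something like: VMware ESXi 5.1.0 build-xxxxxxx
vmware -v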
We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked and powered on) but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change.
I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:
[#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
...
[#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException at
com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)
Using lsof I check the number of file descriptors and I see quite a few entries which look like:
java 18510 root 8556u sock 0,4 1555182 can't identify protocol
java 18510 root 8557u sock 0,4 1555320 can't identify protocol
java 18510 root 8558u sock 0,4 1555736 can't identify protocol
java 18510 root 8559u sock 0,4 1555883 can't identify protocol
If I do a count of open file descriptors every minute I see it growing by 12 every minute. I have no idea what these sockets are.
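(For the count I use something like the following; a sketch, where 18510 is the GlassFish PID from the lsof output above.)

# count open descriptors for the GlassFish process once a minute
watch -n 60 'lsof -p 18510 | wc -l'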
I've undeployed my application so there is only a plain Glassfish instance running and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ.
What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look?
thanks,
chuck
I found this solution; assuming GlassFish runs as user "glassfish", add the following lines to /etc/security/limits.conf to increase the maximum number of open files for the user that GlassFish runs as:
glassfish soft nofile 32768
glassfish hard nofile 65536
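That raises the ceiling rather than fixing the leak, but it buys time. After the glassfish user logs out and back in (pam_limits applies at login), you can verify the new limit took effect (a sketch):

# should report the new soft limit, 32768
su - glassfish -c 'ulimit -n'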