How to reprogram a cheap GPS tracker?

I was wondering if I could get a cheap GPS tracking device such as this one on Amazon and reprogram it to send the coordinates to my own server? I would then like to generate reports from the DB on my server, filtered by date and so on. I would like to build this for a very small-scale courier company I am planning on starting.
I am an amateur/hobbyist programmer and am looking for a few pointers to help me get on the right track. Pun totally intended.

This is a very broad question. But since no one has answered so far, I will just throw in my two cents. First of all, you need to know which GPS signals the (cheap) receiver can track. If it can only track a single frequency (L1), there's not much you can do: you have to live with the large error from ionospheric delay fluctuations.
http://www.navipedia.net/index.php/Ionospheric_Delay
If it can only track the code signal (not the carrier-phase signal), you cannot do carrier smoothing to reduce the noise level of the received code signal.
In other words, if a GPS receiver's hardware is limited, there's not much room for improvement.
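That said, the original question is less about signal quality and more about pointing the tracker at your own server. Many cheap trackers can be reconfigured (often with an SMS or serial command) to report to a host and port you choose, and they typically speak plain TCP with simple comma-separated sentences, though the exact protocol varies per model. Here is a minimal sketch of the server side; the report format, port number, and field order are assumptions, so check your tracker's protocol document:

    # Minimal sketch: accept tracker reports over TCP and store them in
    # SQLite. The "ID,lat,lon,timestamp" format is hypothetical; real
    # trackers each have their own sentence format. Reads one report per
    # connection for simplicity.
    import socket
    import sqlite3

    db = sqlite3.connect("tracker.db")
    db.execute("""CREATE TABLE IF NOT EXISTS positions
                  (device_id TEXT, lat REAL, lon REAL, reported_at TEXT)""")

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 5023))   # port is arbitrary; match the tracker config
    server.listen(5)

    while True:
        conn, addr = server.accept()
        line = conn.makefile().readline().strip()
        # e.g. "TK01,51.5074,-0.1278,2024-01-01T12:00:00"
        try:
            device_id, lat, lon, ts = line.split(",")
            db.execute("INSERT INTO positions VALUES (?,?,?,?)",
                       (device_id, float(lat), float(lon), ts))
            db.commit()
        except ValueError:
            pass  # malformed report; real code should log this
        conn.close()

Once the positions are in SQLite, the date-based reports the question asks about are just a SELECT with a date-range WHERE clause.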

Related

Ultrasonic sensor which can access raw echo signal

I want to run an experiment that needs a cheap waterproof ultrasonic sensor with access to the raw echo signal. However, I can't find such a product on the market.
Sensors like the JSN-SR04T are very popular at the moment, but they only expose the echo's travel time. The raw signal is processed on-board and I CAN'T get at it.
It seems most companies just want to sell sensors with ranging ability and nothing more, but my project relies heavily on getting the raw data.
For what it's worth, I know a sonar module would meet my expectations, and I have already bought one. But it is too expensive and I don't need something that capable.
So I wonder if you know of a sensor that can achieve this? Worst case, I'll have to hack the current circuit, which would cost a lot of time for little gain.
Edit:
Some companies do provide a solution, which is simply an analog output from the circuit. I will update with its performance later.
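Once you have that analog output, the post-processing is the straightforward part. Here is a minimal sketch of envelope detection on a raw echo, with synthetic samples standing in for ADC output; the sample rate and threshold are assumptions to adapt to your hardware:

    # Sketch of post-processing a raw analog echo, assuming you have
    # sampled the sensor's analog output with an ADC. Synthetic data is
    # used here; swap in your real sample array and sample rate.
    import numpy as np

    SAMPLE_RATE = 200_000          # 200 kS/s, assumed ADC rate
    SPEED_OF_SOUND = 343.0         # m/s in air at ~20 degC

    # Synthetic capture: noise plus a 40 kHz echo burst ~2.9 ms after the ping
    t = np.arange(0, 0.01, 1 / SAMPLE_RATE)
    signal = 0.05 * np.random.randn(t.size)
    echo_start = int(0.0029 * SAMPLE_RATE)
    signal[echo_start:echo_start + 200] += np.sin(2 * np.pi * 40_000 * t[:200])

    # Rectify and smooth to get the envelope, then threshold above the noise floor
    envelope = np.convolve(np.abs(signal), np.ones(50) / 50, mode="same")
    crossings = np.nonzero(envelope > 0.3)[0]
    if crossings.size:
        round_trip = crossings[0] / SAMPLE_RATE
        print(f"distance ~ {round_trip * SPEED_OF_SOUND / 2:.2f} m")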

Can I use sensor-fusion for multiple GPS receivers and better my position estimation?

I am wondering if it makes sense to fuse multiple GPS signals to improve my position estimate. This works fine, for example, for acceleration sensors, but those sensors have white Gaussian noise.
GPS receivers mounted on the same board probably suffer from the same errors, like drift or multipath effects, which cannot be corrected by only fusing the readings of those sensors. I imagine it like a constant offset in the same direction, which won't be corrected and just stays nearly the same.
Furthermore, I have different sensors I can mount on my drone, even an RTK sensor. In my opinion, it makes no sense to fuse a D-GPS with readings from an RTK GPS.
Please correct me if I am wrong.
Thank you in advance, and I hope this forum is the right spot to ask that question.
Yes, you can. Use an EKF-based approach with onboard multi-GPS and multi-IMU.
DJI is doing this, but it can only guard against the failure of one sensor, not a systematic drift pattern. To handle that, you need additional sources, such as visual odometry or lidar odometry, fused into the EKF. The GPS satellite count is a good measure of how bad the position is; it ranges from 0 to 15. When every receiver reports 15, trust GPS more (lower variance); when every receiver is below 6, assign a very high variance to the GPS source.
Yes, RTK might be better when you have a direct line of sight. But once out of sight, the other GPS receivers might be better. So it totally depends on your use case.
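For what it's worth, here is a minimal sketch of that variance-weighting idea in one dimension: inverse-variance fusion of several receivers, with each measurement's variance inflated as its satellite count drops. It is not a full EKF (no IMU prediction step), and the count-to-variance mapping is a made-up placeholder to tune for your receivers:

    # Sketch: inverse-variance fusion of GPS position readings, with the
    # measurement variance inflated when the satellite count drops. The
    # count->variance mapping is a placeholder; tune it for your hardware.
    def variance_from_sat_count(sats: int) -> float:
        if sats >= 15:
            return 1.0      # full trust: ~1 m^2
        if sats < 6:
            return 1e4      # effectively ignore this receiver
        return 1.0 + (15 - sats) * 10.0

    def fuse(readings):
        """readings: list of (position, sat_count) tuples, 1-D positions."""
        weights = [1.0 / variance_from_sat_count(s) for _, s in readings]
        return sum(w * p for w, (p, _) in zip(weights, readings)) / sum(weights)

    print(fuse([(100.2, 15), (101.5, 7)]))  # pulled toward the 15-sat reading

Note that the questioner's caveat still applies: if both receivers share the same multipath error, this weighting reduces noise but cannot remove the common bias; only an independent source (odometry, RTK corrections) can.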

Send and receive data through the power network

I'm not interested in a hardware solution; I want to know about software that could "read" a modulated signal received through the power supply: some sort of low-level driver that would access the power signal in a convenient place and demodulate it.
Is there a way to receive a signal from the computer's power supply? I'm interested in an API or library that would allow the computer to be seen as a node in a Power Line Communication network and receive data directly through the power cable, without the need for a converter. Is there any active research in this field?
Edit:
There is software that reads, monitors, and displays internal component voltages (the DC voltages after being converted and filtered by the power supply). What I need now is a method of data encoding that would be invariant to conversion and filtering, so that the original signal embedded in the AC remains present in some form within the converted DC signal.
This is not possible as described in the question. Yes, with extra hardware you can do it; no, with the standard hardware in a PC, you cannot.
As others have noted, among other problems, the only information you can get from a generic PC is a bit of voltage info for the CPU. It's not going to give a picture of the AC signal, nor any signal modulated on top of it. You'll be watching a few highly regulated DC signals deep inside the computer, probably converted at a relatively low rate too. Almost by definition, if you could see external information on any of those signals, your machine is already suffering a hardware failure and chances are the CPU will be crashing soon...
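To see just how little is visible from software, on Linux you can dump the motherboard's voltage sensors through the hwmon sysfs interface (this assumes lm-sensors drivers are loaded; the set of sensors varies by board). What comes back is a handful of slowly updated DC rail voltages, nothing remotely like the AC waveform:

    # Sketch: dump whatever DC voltage rails the motherboard exposes via
    # Linux hwmon (needs lm-sensors drivers loaded; paths vary by board).
    # This is roughly the full extent of "power signal" visibility from
    # software on a stock PC.
    from pathlib import Path

    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        name = (hwmon / "name").read_text().strip()
        for volt in sorted(hwmon.glob("in*_input")):
            millivolts = int(volt.read_text())   # hwmon reports millivolts
            print(f"{name} {volt.name}: {millivolts / 1000:.3f} V")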
*blink* No...
Edit: I mean, there is the possibility of using the power lines as network cables, but only with special adapters, and those are designed for home networks.
Edit2: You can't read something from the power supply of a computer...it's not designed for that. You would have to create your own component/adapter for this.
Am I misreading this? Wouldn't this be a pure hardware solution?
This is highly improbable without adding some hardware.
You see, the power supplies in a regular PC are switching power supplies which effectively decouple the AC input from the supplied DC voltage needed on the PC side. The AC side just basically provides power that fuels the high-speed power switching circuitry.
Also, a DC signal, by definition, doesn't carry a signal per se: it is a "static" power level (and yes, the power level does vary a bit in the time domain, but not as an easy-to-leverage function).
Yes there can be an AD (Analog to Digital) monitoring chip that can be used on the PC side to read the voltage of the DC component supplied to the motherboard etc., but that doesn't mean there is still a signal that can be harvested: the original power line "signal" might have been through enough filters that there isn't a "signal" left to be processed.
Lastly, one needs to consider that power supply designs vary from company to company; this fact will undoubtedly affect any possible design of a communication solution.
What you describe is possible, but unfortunately you need an adapter to convert the signal running on the power lines into sensible network traffic.
The power line acts as a physical medium and thus sits at the lowest level of the OSI stack. Converting the electrical signal into sensible network traffic requires a hardware adapter, just as it does for Ethernet. Your computer is unable to understand this traffic because its power supply was not built to transmit that information. But note that you can easily find an adapter, and it will work the same as an Ethernet adapter: that is, it will be accessible through the standard BSD socket library.
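In other words, once the adapter is in place, the power line is invisible to your code. A sketch of the software side, which is nothing but ordinary sockets (the address and port are examples):

    # Sketch: once a powerline adapter bridges the mains to Ethernet, the
    # software side is ordinary sockets -- nothing powerline-specific here.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("192.168.1.50", 9000))   # peer behind the other adapter (example)
    sock.sendall(b"hello over the mains\n")
    print(sock.recv(1024))
    sock.close()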
This is ENTIRELY possible, although you would need to either buy or build some hardware to make it happen. In addition, the software solution would be very, very complex.
The computer's power supply would be out of the picture for the most part. You need to read data straight from the wall with as little extraneous noise as possible. From the electrical engineering perspective, this is a very thoroughly covered topic. In the end, all you're really doing is an analog to digital conversion, and the rest keeps your circuit from being fried.
The software solution would basically be eliminating random noise and looking for embedded signals. The math behind analog signal analysis is very complex; you can spend a few semesters in college covering the topic and the rest of your career trying to master it. If you're good at it, there's a cushy job for you on Wall Street predicting the stock market.
And that only covers reading incoming signals. Transmitting is a whole 'nother sport.
Now, it also sounds like you might be interested in a hack. That is...
You could buy a commercial-off-the-shelf power-line Ethernet adapter and tear it apart. They have two prongs that plug into a standard wall outlet. You could remove these and wire them to the INSIDE of a power supply.
To do that, you'd have to tear apart a power supply as well, which is incredibly dangerous and I hereby warn you and anyone else to NEVER attempt this.
The entire Ethernet adapter could be tucked into the power supply and you could basically have an Ethernet port on the surface of your power supply (either inside or outside the computer). Simply wire that to a standard Ethernet adapter and voila(!), you have nothing but a power cable connecting your computer to the wall outlet, AND you magically have Ethernet!
Note that there also has to be another power-line Ethernet adapter somewhere else for you to establish a network and make the whole project useful.
How can you read modulated data from the power supply? You are talking about voltage and ohms, and apart from a possible electrical shock (which would be just shocking :)), there are specialized electrical plugs with Ethernet jacks in them that you can use.
I'd hazard a guess that this is totally transparent, as per Adrien Plisson's answer, i.e. you would have all of the OSI layers and it is no different: you can write code to read from the sockets.
AFAIK, no company that produces these electrical plugs would ever open up the API, for competition reasons. It is still in the early stages, and adoption is low because it is very expensive (120 euro here in my country for a pair of them) and does not deliver the quoted speed: a 100 Mbps power plug may get maybe 85 Mbps due to varying conditions and phenomena on the power line (think surges, brownouts, interference).
My 2cents.
Hope this helps,
Best regards,
Tom.

Testing a Real-Time Operating System for Hardness

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer maybe:
Use the signal generator to generate a pulse at some varying frequency.
A random distribution across some range would be best.
Use the signal generator's trigger signal to start the scope.
The RTOS has to respond, do its thing, and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Set the scope to persist/collect mode.
Set the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well, and it is unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. Keep repeating the experiment with faster traces if there are no outliers, to get a good image of the slope. That should be good for some speculative conclusion in your paper.
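The same analysis can be done off-scope if you can capture the response times; a small sketch of the statistics, with synthetic latencies standing in for real measurements (the outlier rule of thumb here is an assumption, not a standard):

    # Sketch: the statistics behind the scope exercise. Feed in measured
    # response times (seconds); synthetic values stand in for real captures.
    import statistics

    latencies = [520e-6, 510e-6, 530e-6, 515e-6, 2400e-6, 525e-6]  # example data

    median = statistics.median(latencies)
    sd = statistics.stdev(latencies)   # the "softness" under a normal assumption
    # Simple rule of thumb: anything beyond 2x the median is a slow outlier
    outliers = [x for x in latencies if x > 2 * median]

    print(f"median {median * 1e6:.0f} us, SD {sd * 1e6:.0f} us")
    print("slow outliers:", outliers or "none -- looks hard so far")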
If, for your application, a delta of 1 us is okay and you measure 0.5 us, it's all cool.
Anyway, you can publish the results (possibly even in the academic sense, but certainly on the web).
Link from this Question to the paper when you've written it.
Hard real-time has more to do with how your software works than with the hardware on its own. When asking if something is hard real-time, the question must be applied to the complete system (hardware, RTOS, and application). This means hard or soft real-time is a system design issue.
Under loading that exceeds the specification, even a hard real-time system will fail (hopefully with proper failure indication), while a soft real-time system with low loading could give hard real-time results. How much processing must happen in time, and how much pre/post-processing can be deferred, is the real key to hard/soft real-time.
In some real-time applications some data loss is not a failure; it just has to stay below a certain level, again a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data is going to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing your computational load increases and you now have a different hard real-time limit.
This board, running a bare-bones scheduler, will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is a free-running hardware timer on the board, used to time-stamp the start and end of my cycle (or, if you run a full RTOS, to time-stamp acquisition and transition). Save your max time, and keep a running average of the values over one second. If your average is around 50% of the period and your max is within 20% of your average, you are OK; if not, it is time to refactor your application. As your application grows, the cycle time will grow, and you can monitor the effect of every software change on your cycle time.
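A minimal sketch of that bookkeeping, using a monotonic clock in place of the board's free-running hardware timer (on the real board this would be C against the timer registers; do_cycle() is a stand-in for your application's work):

    # Sketch of the cycle-time bookkeeping: track max and average cycle
    # time, report once a second. do_cycle() is a hypothetical stand-in
    # for the real application work.
    import time

    def do_cycle():
        time.sleep(0.004)          # placeholder workload (~4 ms)

    PERIOD = 0.010                 # 10 ms deadline
    max_cycle, total, count = 0.0, 0.0, 0
    window_start = time.monotonic()

    while True:
        start = time.monotonic()
        do_cycle()
        elapsed = time.monotonic() - start

        max_cycle = max(max_cycle, elapsed)
        total += elapsed
        count += 1

        if time.monotonic() - window_start >= 1.0:   # report once a second
            avg = total / count
            print(f"avg {avg / PERIOD:.0%} of period, max {max_cycle / PERIOD:.0%}")
            # Rule of thumb from above: avg near 50%, max within 20% of avg
            max_cycle, total, count = 0.0, 0.0, 0
            window_start = time.monotonic()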
Another way is to use a hardware timer to generate a cyclical interrupt. If you are in time, reset the timer; if you miss the deadline, have the interrupt handler signal a failure. This will only warn you once your application is already taking too long, but because it relies on hardware and interrupts, you can't miss the event.
These solutions also eliminate the need to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, solving timing problems as soon as they are introduced rather than at the end.
Hope this helps
I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run a proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and measure phase shift and variance for each input type. Worst case is then used in specifications of the system.
Again, if you are using standard ports, you can easily generate those on PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
Now, that won't say anything about OS being real-time - it could be running vanilla Linux or even Win CE and still produce good and stable results in those tests if hardware is fast enough.
So you need to simulate heavy and varying loads on the processor, memory, and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If the latency stays constant, it's hard real-time. If it varies but never rises above an acceptable limit under any load or input signal type, it's soft. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if your hardware is fast enough.

How does GPS in a mobile phone work exactly? [closed]

I assume it doesn't connect to anything (other than the satellite, I guess); is this right? Or does it connect to something and incur some kind of charge?
GPS, the Global Positioning System run by the United States Military, is free for civilian use, though the reality is that we're paying for it with tax dollars.
However, GPS on cell phones is a bit murkier. In general, it won't cost you anything to turn on the GPS in your cell phone, but getting a location usually involves the cell phone company, in order to get it quickly with little signal, and to get a location when the satellites aren't visible (since the government requires a fix for emergency 911 purposes even when the satellites aren't visible). This uses up some cellular bandwidth. It also means that for phones without a regular GPS receiver, you cannot use the GPS at all if you don't have cell phone service.
For this reason most cell phone companies have the GPS in the phone turned off except for emergency calls and for services they sell you (such as directions).
This particular kind of GPS is called assisted GPS (AGPS), and there are several levels of assistance used.
GPS
A normal GPS receiver listens to a particular frequency for radio signals. Satellites send time coded messages at this frequency. Each satellite has an atomic clock, and sends the current exact time as well.
The GPS receiver figures out which satellites it can hear, and then starts gathering those messages. The messages include time, current satellite positions, and a few other bits of information. The message stream is slow - this is to save power, and also because all the satellites transmit on the same frequency and they're easier to pick out if they go slow. Because of this, and the amount of information needed to operate well, it can take 30-60 seconds to get a location on a regular GPS.
When it knows the position and time code of at least 3 satellites, a GPS receiver can assume it's on the earth's surface and get a good reading. 4 satellites are needed if you aren't on the ground and you want altitude as well.
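To make the geometry concrete, here is a toy least-squares trilateration sketch in two dimensions: given known satellite positions and measured ranges, iterate to the receiver position. A real receiver works in 3-D and must also solve for its own clock bias, which is part of why the extra satellite helps:

    # Toy 2-D trilateration sketch: solve for a position from known
    # satellite positions and measured ranges via Gauss-Newton least
    # squares. Positions are in km; a real receiver adds a 3rd dimension
    # and a clock-bias unknown.
    import numpy as np

    sats = np.array([[0.0, 20200.0], [15000.0, 18000.0], [-12000.0, 19000.0]])
    true_pos = np.array([100.0, 0.0])
    ranges = np.linalg.norm(sats - true_pos, axis=1)  # pretend these were measured

    pos = np.zeros(2)                    # initial guess
    for _ in range(10):                  # Gauss-Newton iterations
        est = np.linalg.norm(sats - pos, axis=1)
        jac = (pos - sats) / est[:, None]             # d(range)/d(pos)
        delta, *_ = np.linalg.lstsq(jac, ranges - est, rcond=None)
        pos += delta

    print(pos)   # converges to ~[100, 0]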
AGPS
As you saw above, it can take a long time to get a position fix with a normal GPS. There are ways to speed this up, but unless you carry an atomic clock with you all the time, or leave the GPS on all the time, there will always be a delay of between 5 and 60 seconds before you get a location.
In order to save cost, most cell phones share the GPS receiver components with the cellular components, and you can't get a fix and talk at the same time. People don't like that (especially when there's an emergency) so the lowest form of GPS does the following:
Get some information from the cell phone company to feed to the GPS receiver. Some of this is gross positioning information based on which cellular towers can 'hear' your phone, so by this time they already know your location to within a city block or so.
Switch from cellular to the GPS receiver for 0.1 second (or some small, practically unnoticeable period of time) and collect the raw GPS data (no processing on the phone).
Switch back to phone mode, and send the raw data to the phone company.
The phone company processes that data (acting as an offline GPS receiver) and sends the location back to your phone.
This saves a lot of money on the phone design, but it puts a heavy load on cellular bandwidth, and with a lot of requests coming in it requires a lot of fast servers. Still, overall it can be cheaper and faster to implement. The carriers are reluctant, however, to release GPS-based features on these phones due to this load, so you won't see turn-by-turn navigation here.
More recent designs include a full GPS chip. They still get data from the phone company, such as a rough location based on tower positioning and the current satellite locations; this provides sub-second fix times. This information is only needed once, and the GPS can keep track of everything after that with very little power. If the cellular network is unavailable, they can still get a fix after a while. If the GPS satellites aren't visible to the receiver, they can still get a rough fix from the cellular towers.
But to completely answer your question - it's as free as the phone company lets it be, and so far they do not charge for it at all. I doubt that's going to change in the future. In the higher end phones with a full GPS receiver you may even be able to load your own software and access it, such as with mologogo on a motorola iDen phone - the J2ME development kit is free, and the phone is only $40 (prepaid phone with $5 credit). Unlimited internet is about $10 a month, so for $40 to start and $10 a month you can get an internet tracking system. (Prices circa August 2008)
It's only going to get cheaper and more full featured from here on out...
Re: Google maps and such
Yes, Google maps and all other cell phone mapping systems require a data connection of some sort at varying times during usage. When you move far enough in one direction, for instance, it'll request new tiles from its server. Your average phone doesn't have enough storage to hold a map of the US, nor the processor power to render it nicely. iPhone would be able to if you wanted to use the storage space up with maps, but given that most iPhones have a full time unlimited data plan most users would rather use that space for other things.
You must be able to receive from at least 3 of the 24-32 satellites out there, and each broadcasts the time from a synchronized atomic clock. The differences among the times you receive at any one moment tell you how long each broadcast took to reach you, and thus where you are in relation to the satellites. So it sort of reads from something, but it doesn't connect to that thing. Note that this doesn't tell you your orientation; many GPSes fake that (and speed) by interpolating data points.
If you don't count the cost of the receiver, it's a free service. Apparently there are higher-resolution services out there that are restricted to military use. Those likely involve a fixed cost for a license to decrypt the signals, along with a confidentiality agreement.
Now, your device may support GPS tracking, in which case it might communicate, say via GPRS, with a database that stores the locations the device finds itself at, so that multiple devices may be tracked. That would require some kind of connection.
Maps are either stored on the device or received over a connection, and navigation is computed from those maps' databases. These are likely a licensed item with an associated cost, though if you use a service like Google Maps, they hold the license with NAVTEQ and others.