What are the different types of testing, like DST and WANem? - testing

Can anybody explain what the different types of testing are, like DST and WANem?

WANem enables you to do performance tests such as simulating different connection speeds. For example, you can simulate VPN, 3G, ISDN or DSL speeds to see whether your application still reacts in a timely fashion.
WANem is free, Linux-based, and acts like a router.
There are several other testing tools out there as well (both software and hardware based).
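If you want to try it out: WANem typically runs from a bootable live CD or as a VM and is configured through a web interface. On the machines under test you usually just add a static route that sends the traffic through the WANem box as its gateway (for example, on Linux, something like route add -host <server-ip> gw <wanem-ip>), and then dial in bandwidth, delay, jitter and packet loss from the WANem web UI. The exact commands depend on your setup, so check the WANem documentation.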

I think DST and WANem are two unrelated terms.
DST stands for "Daylight Saving Time", which relates to dates and times; many countries use DST to make better use of daylight in the evenings. You can find more information here: http://www.timeanddate.com/time/dst/
WANem stands for "Wide Area Network Emulator", which relates to testing over the internet or a remote network. For more information, see http://sourceforge.net/projects/wanem/files/Documents/wanemulator_all_about_v2.0.pdf/download?use_mirror=kaz

Related

Connect a microcontroller to the internet and download data from an API

Let me start by saying I am a complete newbie with microcontrollers, so please help!
I want to use a microcontroller that stores a year's worth of timestamps in memory. The reason is that I want to write a simple conditional that triggers an output at these times of day (e.g. today, if time == X, set output = 1).
My question is: how can I get the timestamp data into the microcontroller? It is actually downloadable via an API. Can I make an API call and download the information through the microcontroller, or is there another way to get the data into its memory?
A "microcontroller" is not a complete system and they are not all the same. It could be a lowly 8-bit 8051 running bare-metal code, or it could be a 32 bit chip capable of running Linux. There is a lot of additional hardware and software between a "microcontroller" and The Internet.
From a software point of view (and that is the scope in which the question is valid on StackOverflow), you need at least a TCP/IP stack and drivers for the network interface (Ethernet most commonly). How you store the data is entirely within your design; your system may have a filesystem, or it may just have a small amount of EEPROM, or you might store it in on-chip flash memory for example. You have to tailor your software solution to the hardware resources available on your system (and your system is not just the microcontroller).
Given a TCP/IP stack the "API" will be whatever that stack provides - it may be a complete BSD socket API or something more lightweight. It may or may not provide application layer protocols such as FTP, Telnet or SSH. For this simple application a proprietary application protocol would probably suffice allowing you to work at the TCP/IP socket level.
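If the stack does give you a BSD-style socket API, pulling the data down can be as simple as a plain HTTP GET. A rough sketch in C, assuming standard socket calls are available; api.example.com and /v1/timestamps are placeholders for the real service, and an embedded stack such as lwIP in sockets mode may need different headers and initialization:

#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Rough sketch: fetch timestamp data over HTTP with a BSD-style socket API.
   The host name and path are placeholders for whatever the real API is. */
int fetch_timestamps(char *buf, size_t buflen)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("api.example.com", "80", &hints, &res) != 0)
        return -1;

    int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s < 0) { freeaddrinfo(res); return -1; }
    if (connect(s, res->ai_addr, res->ai_addrlen) != 0) {
        close(s);
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    const char *req = "GET /v1/timestamps HTTP/1.1\r\n"
                      "Host: api.example.com\r\n"
                      "Connection: close\r\n\r\n";
    send(s, req, strlen(req), 0);

    /* Read the response into a caller-supplied buffer. */
    ssize_t n;
    size_t total = 0;
    while (total < buflen - 1 &&
           (n = recv(s, buf + total, buflen - 1 - total, 0)) > 0)
        total += (size_t)n;
    buf[total] = '\0';

    close(s);
    return (int)total;
}

On a device with only a few kB of RAM you would parse the response as it arrives rather than buffering the whole thing, as the answer below points out.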
Another thing to consider is where time comes from. Will the system have an RTC (requiring an RTC crystal and battery), or will it get time via the Internet connection, GPS or other source?
The answer to your question depends on your design requirements and constraints:
what microcontroller do you want to use, and how much memory will it have available?
can it connect to the internet? Is an internet connection available all the time?
how does it know what time it is?
do the timestamps change over time? E.g., once downloaded, can the timestamp list become obsolete?
There are many possible approaches: you can download the data manually and write it to an SD card or to the microcontroller's internal memory (if the data set is small), or you can program the microcontroller to download the data using the API. Just keep in mind its memory limitations: many units have only 1-2 kB of RAM, so downloading all the data at once and holding it in RAM can become a problem.
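For the "if time == X, set output = 1" part, here is a rough sketch of the simplest approach: bake a constant table into flash and compare it against the RTC in the main loop. rtc_now() and gpio_set() are placeholders for whatever your RTC driver and GPIO API actually provide:

#include <stdint.h>
#include <stdbool.h>

/* Trigger times (here: seconds since midnight) kept in flash/ROM via 'const'.
   In a real build this table would be generated from the downloaded API data. */
static const uint32_t trigger_times[] = {
    6 * 3600,          /* 06:00 */
    12 * 3600 + 1800,  /* 12:30 */
    21 * 3600,         /* 21:00 */
};
#define NUM_TRIGGERS (sizeof trigger_times / sizeof trigger_times[0])

/* Placeholders for your board support code. */
extern uint32_t rtc_now(void);              /* current seconds since midnight */
extern void gpio_set(int pin, bool level);

void check_triggers(void)
{
    uint32_t now = rtc_now();
    for (unsigned i = 0; i < NUM_TRIGGERS; i++) {
        if (now == trigger_times[i]) {      /* in practice compare a window, not == */
            gpio_set(1, true);
            return;
        }
    }
    gpio_set(1, false);
}

Note that an exact == comparison only works if the check runs at least once per second; in practice you would compare against a small window, or remember the last trigger that fired.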

Is it possible to synchronize two computers to better than 1 ms accuracy using any internet protocols?

Say I have two computers, one located in Los Angeles and another located in Boston. Are there any known protocols or Linux commands that could synchronize both of those computers' clocks to better than 1 ms and NOT use GPS at all? I am under the impression the answer is no (What's the best way to synchronize times to millisecond accuracy AND precision between machines?).
Alternatively, are there any standard protocols (like NTP) with which the relative time difference between these two computers could be known accurately, even if the absolute time synchronization is off?
I am just wondering if there are any free or inexpensive ways to get better than 1 ms time accuracy without having to resort to GPS.
I don't know of an existing protocol for this (perhaps there is one), but I can offer a method similar to the way scientists measure speeds close to that of light:
Have a process on both servers "ping" the other server, wait for a response, and time how long the response took. Then start pinging periodically, exactly when you expect the next ping to come in. By averaging (and discarding any far-off samples) you will, after a while, have the two servers "thumping away" at the same rhythm. Each side can also measure the interval between "beats" to very high accuracy by dividing a (long) period of time by the number of beats in it.
Once the "rhythm" is established, if you know that one server's time is correct, or you simply want to use its time as the base, then you know what time it is when your server's signal reaches the other server. Along with its response it sends you the time IT has, and you can use that time to synchronize your own clock.
Last but not least, most operating systems only let non-kernel code act with an accuracy of a few tens of milliseconds by default; you cannot expect something to happen within a smaller interval than that. The only way to overcome this is to have a "native" DLL or driver that can react in step with the clock, and even that will only give you a certain speed of reaction, depending on the system (hardware and software).
Read about Real-Time systems and the "server" you are talking about (Windows? Linux? Embedded software on a microchip? Something else?)

Is it possible to change the guest wall clock speed in a virtualized environment?

We're undertaking a large project that is focused on delivering automated testing of the software that we produce.
We have a lot of "events" that trigger certain behavior at specific times. Ideally, we would be able to exercise these tests in an automated fashion without having to step the system clock through intervals to specific points in time.
To that end, I'm wondering if there is a way (with VMWare, or any other virtualization software) to increase the speed of the system clock of the guest operating system. I'm not interested in measuring performance in these tests, only functionality.
Is there anything out there that would allow for this behavior?
It works for VirtualBox:
VBoxManage setextradata "VM name" "VBoxInternal/TM/WarpDrivePercentage" x
where x is the percentage you want (for instance, 200 doubles the clock speed, 50 halves it).
You can also find more information in the VirtualBox manual, in the section "Accelerate or slow down the guest clock". Regards.
I was able to work around this using the Win32 API SetSystemTimeAdjustment()
This allows you to increase the amount of time added to the system clock on each OS tick. It's generally meant for correcting clock skew, but it can be used outside of that particular context.
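A minimal sketch of that approach, assuming the process already has SeSystemtimePrivilege enabled; the 2x factor is just an example:

#include <windows.h>
#include <stdio.h>

/* Make the wall clock run roughly twice as fast by adding 2x the normal
   amount of time per tick. Requires SeSystemtimePrivilege; remember to
   restore automatic adjustment afterwards. */
int main(void)
{
    DWORD adjustment, increment;
    BOOL  disabled;

    if (!GetSystemTimeAdjustment(&adjustment, &increment, &disabled)) {
        printf("GetSystemTimeAdjustment failed: %lu\n", GetLastError());
        return 1;
    }
    printf("per-tick increment: %lu (100 ns units), adjustment: %lu\n",
           increment, adjustment);

    /* Add twice the normal increment per tick. */
    if (!SetSystemTimeAdjustment(increment * 2, FALSE)) {
        printf("SetSystemTimeAdjustment failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... run the time-dependent tests here ... */

    /* Restore normal behaviour: TRUE re-enables the system's own adjustment. */
    SetSystemTimeAdjustment(0, TRUE);
    return 0;
}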
I don't see what the benefits are of testing this in a fast-forwarding VM instead of unit testing the event trigger using a mock implementation of the date/time dependency.
The only thing you "gain" by testing this in a fast-forwarding VM is that you also test the system's and the programming language's date/time implementations, which I think you are safe to trust because they have been used, developed and tested by so many people for such a long time.

Permanent DOS Attacks - Anyone Knowledgeable?

So, I'm looking into Permanent DOS attacks for a class, and I'm having a hard time coming up with concrete examples. There's a lot of information about Phlashing (flashing firmware to either brick the device, or put malicious firmware in its place, for those of you who don't know the term) but I'd like to have a broader set of examples.
That being said, there has to be a way to write code that will do something like wear out disk arms, right? Something that will have the disk seek to the end of the disk, then back to the front, on and on. Anyone have an example of how that would be accomplished? Is there some way to specify where to seek to on a disk in C (similar to moving to a certain point in a file, but for the entire HDD!)? If not, I guess there's always trying to force a file's location on the disk... which seems like less fun to try to accomplish. Again, can you do something like that programmatically?
If anyone has any insight into these types of attacks, or any good resources for me to check into, I'd appreciate it. Maybe you read a story about it on Slashdot a few years back? Let me know! The more info I can gather, the less likely I'll be forced to kill time during my talk by bricking my router in the class :) I'm not made of money OR routers!
Seems like these would primarily be limited to physical attacks and social engineering ("To enable your computer's hidden turbo function, remove the cover and pry off this part."). But:
Adjust screen refresh rates to insane values to blow older CRTs
Monkey with ACPI fan, charge, or battery controls if possible to cause overheating or battery failure.
Overwrite every rewritable storage device of every kind attached to any bus. Discover and overwrite any IDE, USB, etc... device you know the flash updater details for.
Of course nothing is permanent. You can replace the hard drive, BIOS chips, CPU, motherboard, memory, etc...
Although it is mostly fictional, the Halt and Catch Fire instruction would be a very convenient and permanent DOS attack.
Steve Gibson (google his name) has a paper he wrote a few years back about protocol-level vulnerabilities in TCP/IP. Some of it is still pertinent today.
Socially engineer the power company or ISP to turn off service at the location in question.
Many devices in a computer today have their own firmware, including but not limited to the CPU, DVD drive, HDD, VGA card, and motherboard (BIOS). Most of these devices also have a way of updating their firmware, which can likewise be used to brick them pretty efficiently, although this requires an individual approach to every device, often using privileged instructions and undocumented interfaces.
It's possible for a virus to do this. I seem to recall an actual virus doing this back in the day, but can't find anything to back that up.
I was able to find an article where the author has a conversation with a VP from Western Digital, wherein he states that a program could potentially access a hard drive's firmware, enabling such a DOS attack:
There are back doors if you will that allow us to get into places that the operating system can't go through the IDE connector
There used to be a few viruses that could cause old CRT monitors to break. They could send invalid sync signals out the VGA port that were too high in frequency for the video sweep. I also remember a few that would use bad-sector flagging to draw images in old versions of Scandisk (we are talking early '90s or older). I don't remember any of the names or have any references, but they used to be quite annoying.
Fortunately, better circuits, memory protection and API abstraction have made such attacks very difficult, if not impossible.

Where do you draw the line between what is "embedded" and what is not?

ASIDE: Yes, this can be considered a subjective question, but I hope to draw conclusions from the statistics of the responses.
There is a broad spectrum of computing devices. They range in physical size, computational power and electrical power. I would like to know what embedded developers think the determining factor(s) are that make a system "embedded." I have my own determination that I will withhold for a week so as to not influence the responses.
I would say "embedded" is any device on which the end user doesn't normally install custom software of their choice. So PCs, laptops and smartphones are out, while XM radios, robot controllers, alarm clocks, pacemakers, hearing aids, the doohickey in your engine that regulates fuel injection etc. are in.
You might just start with Wikipedia for a definition:
http://en.wikipedia.org/wiki/Embedded_system
"An embedded system is a computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. "
Coming up with a concrete set of rules for what an embedded system is, is to a large degree pointless. It's a term that means different things to different people, maybe even different things to the same people at different times.
There are some things that are pretty much never considered an embedded system, for example a Windows Desktop machine. However, there are companies that put their software on a Windows box - even a bog standard PC (maybe a laptop) - set things up so their application loads automatically and hides the desktop. They sell that as a single purposed machine that many people would call an embedded system (but many people wouldn't). Microsoft even sells a set of tools called Embedded Windows that helps enable these kinds of applications, though it's targeted more to OEMs who will customize the system at least somewhat instead of just putting it on a standard PC. Embedded Windows is used for things like ATM machines and many other devices. I think that most people would consider an ATM an embedded system.
But go into a 7-11 with an ATM that has a keyboard (I honestly don't know what the keyboard is for), press the right shift key 5 times and you'll get a nice Windows "StickyKeys" messagebox (I wonder if there's an exploit there - I sure hope not). So there's a Windows system there, just hidden and with some functionality removed - maybe not as much as the manufacturer would like. If you could convince it to open up notepad.exe somehow does the ATM suddenly stop being an embedded system?
Many, many people consider something like the iPhone or the iTouch an embedded system, but they have nearly as much functionality as a desktop system in many ways.
I think most people's definition of an embedded system might be similar to Justice Potter Stewart's definition of hard-core pornography:
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it...
I consider an embedded system one where the software is rarely developed directly on the target system. This definition includes sophisticated embedded systems like the iPhone, and excludes primitive desktop systems like the Commodore 64. Not having the development tools on the target means you have to add 'reprogram device' to the edit-compile-run cycle. Debugging is also made more complicated. This encompasses most of the embedded "feel."
Software implemented in a device not intended as a general purpose computing device is an "embedded system".
Typically the system is intended for a single purpose, and the software is static.
Often the system interacts with non-human environmental inputs (sensors) and mechanical actuators, or communication with other non-human systems.
That's off the top of my head. Other views can be read at this embedded.com article
Main factors:
Installed in a fixed place somewhere (you can't carry the device itself around, only the thing it's built into)
They run a long time (often years) with little maintenance
They don't get patched often
They are small, use little power
Small or no display
+1 for a great question.
Like many things there is a spectrum.
At the "totally embedded" end you have devices designed for a single purpose. Alarm clocks, radios, cameras. You can't load new software and make it do something else. THere is no support for changing the hardware,
At the "totally non-embedded" end you have your classic PCs where everything, both HW and SW, can be replaced.
There's still a lot in between those extremes. Laptops and netbooks, for example, have minimally expandable HW, typically only memory and hard disk can be upgraded. But, the SW can be whatever you want.
My education was as a computer engineer, so my definition of embedded is hardware oriented. I draw the line at the MMU (memory management unit). If a chip has an MMU, it usually has off-chip RAM and runs an OS. If a chip does NOT have an MMU, it usually has on-board RAM and runs an RTOS, microkernel or custom executive.
This means I usually dismiss anything running linux, which is shortsighted. I admit my answer is biased towards where I tend to work: microcontroller firmware. So I am glad I asked this question and got a full spectrum of responses.
Quoting a paragraph I've written before:
An embedded system for our purposes is a computer system that has a specific and deterministic functionality\cite{LamieReal}. Typically, processors for embedded systems contain elements such as onboard RAM, special-purpose processing elements such as a digital signal processor, analog-to-digital and digital-to-analog converters. Since the processors have more flexibility than a straightforward CPU, a common term is microcontroller.