Wikipedia says:
Also referred to as WPA-PSK (pre-shared key) mode, this is designed for home and small office networks and doesn't require an authentication server.[9] Each wireless network device encrypts the network traffic using a 256 bit key. This key may be entered either as a string of 64 hexadecimal digits, or as a passphrase of 8 to 63 printable ASCII characters.
Is it true that everyone in the world uses printable ASCII characters for their WPA passwords? What about China, Arabic-speaking countries, Japan, and many other places?
The Latin alphabet is used by only a small fraction of the world's population. Are people all over the world forced to use printable ASCII characters for their WiFi passwords?
Yes, that is correct.
People can use the Latin alphabet in addition to their native script, and they need it for more than WiFi passwords: URLs and e-mail addresses, for example, are also made up of Latin letters.
But don't worry, this is not a problem in practice: in every country, the Latin alphabet is printed on the keyboard.
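For context, the 256-bit key mentioned in the quoted text is not the passphrase itself: WPA-PSK derives it from the passphrase and the SSID using PBKDF2 (4096 iterations of HMAC-SHA1), which is why the standard talks about characters and hex digits rather than raw key material. Here is a minimal sketch of that derivation using OpenSSL; the passphrase and SSID values are just examples.
// Sketch: derive the 256-bit WPA pre-shared key from a passphrase and SSID.
// Requires OpenSSL; compile with -lcrypto.
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *passphrase = "example passphrase";   // 8..63 printable ASCII characters
    const char *ssid       = "ExampleNetwork";       // the SSID acts as the salt
    unsigned char psk[32];                           // 256-bit pre-shared key

    // IEEE 802.11i: PSK = PBKDF2(HMAC-SHA1, passphrase, SSID, 4096 iterations, 256 bits)
    PKCS5_PBKDF2_HMAC_SHA1(passphrase, (int)strlen(passphrase),
                           (const unsigned char *)ssid, (int)strlen(ssid),
                           4096, (int)sizeof psk, psk);

    for (size_t i = 0; i < sizeof psk; i++)
        printf("%02x", psk[i]);                      // print the derived key as 64 hex digits
    printf("\n");
    return 0;
}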
This may be a stupid question, but it's been confusing me.
I've been watching some videos on Embedded Systems and they're talking about parallel ports, the data, the direction and the amount used.
I understand that the ports are connected to wires which feed other parts of the system or external devices. But I am confused because the lecture I watched says that to control a single LED would require 1 bit from 1 port.
My question is, what does the parallel port on an embedded system look like and how would you connect your own devices to the board? (say you made a device which sent 4 random bits to the port)
EDIT: I have just started learning, so I might have missed a vital piece of information that would tie this all together. I just don't understand how you can have an 8-bit port and only use 1 bit of it.
Firstly, you should know that the term "parallel port" can refer to a wide variety of connectors. People usually use the phrase to describe 25-pin connectors found on older PCs for peripherals like printers or modems, but they can have more or fewer pins than that. The Wikipedia article on them has some examples.
The LED example means that if you have an 8-bit parallel port, it will have 8 pins, so you would only need to connect one of the pins to an LED to be able to control it. The other pins don't disappear or anything strange, they can just be left unconnected. The rest of the pins will be either ones or zeros as well, but it doesn't matter because they're not connected. Writing a "1" or "0" to that one connected pin will drive the voltage high or low, which will turn the LED on or off, depending on how it's connected. You can write whatever you want to the other pins, and it won't affect the operation of the LED (though it would be safest to connect them to ground and write "0"s to them).
Here's an example:
// assume REG is a memory-mapped register that controls an 8-bit output
// port. The port is connected to an 8-pin parallel connector. Pin 0 is
// connected to an LED that will be turned on when a "1" is written to
// Bit 0 (the least-significant bit) of REG
REG = 0x01;  // write a "1" to bit 0, "0"s to everything else
I think your confusion stems from the phrase "we only need one bit", and I think it's a justified confusion. What they mean is that we only need to control the one bit on the port that corresponds to our LED in order to manipulate the LED, but in reality you can't write just one bit at a time, so it's a bit (ha!) misleading. You (probably) won't find registers smaller than 8 bits anymore, so you do have to read/write the registers in whole bytes at a time, but you can mask off the bits you don't care about, or do read-modify-write cycles to avoid changing bits you don't intend to.
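To make that concrete, here is a short sketch of the usual masking and read-modify-write idiom, continuing the same assumed REG register and LED-on-bit-0 wiring as the example above:
// Touch only bit 0 of REG, leaving bits 1..7 exactly as they were.
#define LED_BIT (1u << 0)

REG |=  LED_BIT;        // read REG, set bit 0, write the whole byte back (LED on)
REG &= ~LED_BIT;        // read REG, clear bit 0, write the whole byte back (LED off)

if (REG & LED_BIT) {    // mask off the bits we don't care about when reading
    // the LED output is currently driven high
}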
Without the context of a verbatim transcript of the videos in question, it is probably not possible to be precise about what they may have specifically referred to.
The term "parallel port" historically commonly refers to ports primarily intended for printer connections on a PC, conforming to the IEEE 1284 standard; the term distinguishing it from the "serial port" also used in some cases for printer connections but for two-way data communications in general. More generally however it can refer to any port carrying multiple simultaneous data bits on multiple conductors. In this sense that includes SDIO, SCSI, IDE, GPIB to name but a few, and even the processor's memory data bus is an example of a parallel port.
In the context of embedded systems in general, it most likely refers to a word-addressed GPIO port, although it is not a particularly useful or precise term. On microcontrollers, GPIO (general-purpose I/O) ports are typically word addressable (commonly 8, 16, or 32 bits wide); all bits of a single GPIO port may be written simultaneously (in parallel), with all bit edges synchronised so their states change at the same time.
Now, in the case where you only want to access a single bit of a GPIO (to control an LED, for example), some GPIO blocks allow single-bit access by having separate set/clear registers, while others require read-modify-write semantics on the entire port. ARM Cortex-M supports "bit-banding", an alternate address space where every word address corresponds to a single bit in the physical address space.
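As a hedged illustration of those two styles (the register names below are made up, not taken from any particular datasheet): separate set/clear registers let you touch one output without a read-modify-write, and on parts that have it, Cortex-M bit-banding gives every peripheral bit its own word-sized alias address computed from the documented formula.
// Style 1: hypothetical set/clear registers. Writing a 1 to a bit of
// GPIO_SET drives that output high; writing a 1 to GPIO_CLR drives it low.
// Bits written as 0 are left untouched, so no read-modify-write is needed.
GPIO_SET = (1u << 5);   // pin 5 high
GPIO_CLR = (1u << 5);   // pin 5 low

// Style 2: Cortex-M bit-band alias for the peripheral region.
// alias = 0x42000000 + (byte offset from 0x40000000) * 32 + (bit number) * 4
#define BITBAND_PERIPH(addr, bit) \
    (*(volatile unsigned long *)(0x42000000u + ((addr) - 0x40000000u) * 32u + (bit) * 4u))

BITBAND_PERIPH(0x40020014u, 5) = 1;   // example register address; sets only bit 5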
However, bit access of a GPIO port is not the same as a serial port; the latter refers to a port where the data bits are sent one at a time, as opposed to multiple data bits simultaneously.
Moreover, the terms parallel port and serial port imply some form of block or stream data transfer, as opposed to control I/O where each bit controls one thing, such as your example of the LED on/off - there the LED is not "receiving data", it is simply being switched on and off. This is normally referred to as digital I/O (or DIO). In this context you might refer to a digital I/O port, a term that distinguishes it from analogue I/O, where the voltage on the pin can be set or measured, as opposed to taking just two states, high/low.
My team and I are new to the Kollmorgen AKD Basic motor drive and are working with this drive for the first time, using its TCP/IP protocol interface with LabVIEW.
We could write/set various variables successfully, but we are facing an issue while reading settings and variables from the drive. The problem is that we do not know the exact number of bytes to read from the Kollmorgen AKD Basic drive for a particular command: the actual number of bytes returned by the drive is different from what is documented. For example, per the documentation, a read request for the value stored in the USER.INT6 variable should return a DWORD, i.e. 4 octets. If USER.INT6 contains the value 1, then I get '{CR}{LF}--' when I read 4 bytes; if I read 8 bytes, I get '{CR}{LF}-->1{CR}{LF}', where {CR} is the carriage-return character and {LF} is the line-feed character. If USER.INT1 contains the value 100, then I get '{CR}{LF}-->100' on reading 8 bytes, and if USER.INT6 contains the value 1000, then I have to read 9 bytes.
This happens with all other variables as well. The real problem is that at run time I don't know exactly what value a variable will have, and therefore how many bytes I need to read to get the complete value. I am sure I am not the first to face this issue and that there is a way to overcome it, so I am seeking the help of seasoned experts. Please let me know.
Thanks and Regards,
Sandeep
I have no experience with that particular device, but in general, if it doesn't return a known number of bytes, then you're basically down to reading one byte at a time until you see the terminator.
In the specific case of CRLF, you can configure the TCP Read primitive to use a terminated mode using the mode input, so I believe that should work in your case, but I never tried it myself.
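For reference, outside LabVIEW the byte-at-a-time approach looks roughly like this. This is only a sketch in C over POSIX sockets, assuming sock is an already-connected TCP socket and that the drive terminates every reply with CRLF.
// Sketch: read one byte at a time until a CR LF pair is seen.
// Error handling is minimal; a real implementation should also time out.
#include <sys/types.h>
#include <sys/socket.h>

ssize_t read_reply(int sock, char *buf, size_t buflen)
{
    size_t n = 0;
    while (n < buflen - 1) {
        char c;
        if (recv(sock, &c, 1, 0) != 1)
            return -1;                               // error or connection closed
        buf[n++] = c;
        if (n >= 2 && buf[n - 2] == '\r' && buf[n - 1] == '\n')
            break;                                   // terminator found
    }
    buf[n] = '\0';
    return (ssize_t)n;
}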
I would suggest changing the TCP/IP read mode from standard to CRLF; I have a feeling that your device terminates its messages with a CRLF sequence. If you specify a large enough number of bytes to read (e.g. 20), it will read until either that many bytes arrive or the CRLF combination is received.
Could you change the display to hex? I have a feeling that your '-->' is actually the number of bytes in the response.
It would help if you posted your code!
From a quick glance at the Kollmorgen site, it looks like this drive uses Modbus TCP/IP. I suggest using the LabVIEW Modbus Library: http://sine.ni.com/nips/cds/view/p/lang/en/nid/201711
Check out Modbus on Wikipedia to learn the specs: http://en.wikipedia.org/wiki/Modbus
You can get support for this from Kollmorgen itself; they have application engineers based in Pune.
I am reading binary data from an NSInputStream that is written by a third-party source (e.g. hardware) through the External Accessory framework, and converting it to a string. Is there an endianness issue I should be concerned about, i.e. should I ask the hardware provider what endianness they use when they send their string?
As people have said, probably not if you're using ASCII or UTF-8, since those are byte-oriented encodings. Most processors nowadays, even in dedicated devices, use little endian.
On an unrelated note, if you're doing networking via external accessories, you might have to watch for network and host byte orders.
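For the text itself there is usually nothing to swap, since ASCII and UTF-8 are byte-oriented; the byte-order question only bites for multi-byte binary fields such as length prefixes or integer values. A small sketch in C of the conventional network-to-host conversion (the 32-bit length prefix is just an assumed example of such a field):
// Sketch: a multi-byte binary field (here a 32-bit length prefix) is the
// place where byte order matters; convert from network (big-endian) order
// to host order before using it.
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

uint32_t length_from_wire(const unsigned char *buf)
{
    uint32_t be;
    memcpy(&be, buf, sizeof be);   // the 4 bytes exactly as they arrived
    return ntohl(be);              // big-endian on the wire -> host byte order
}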
As I searched forums I learned that for tracking GPS I must send coordinates over an internet connection or by SMS. But as far as I know we can communicate via radio waves, sending voice, pictures, and data. Can I use this for getting data from a GPS device? Because ham radio is free.
There are radio bands considered "unlicensed" that are free to use if your transmitter falls within regulated limits. These are mainly "line-of-sight" bands. Common WiFi and bluetooth radios are examples of standardized packet radios that work in the 2.4GHz unlicensed band.
It is not difficult to find similar devices in the 902MHz band, including standardized ZigBee mesh radio equipment.
Licensed amateur radio operators enjoy some advantages the unlicensed devices cannot provide, such as higher power limits and more diverse frequency choices. But these privileges come with restrictions - for example autonomous operation is not permitted in the "shortwave" bands, and operation for any commercial purpose is prohibited.
As Adam mentioned, APRS is the de facto standard for the format of informational beacons and the method for repeating them across the amateur packet radio network.
From your post I gather that you want a wide-area service with which you can track a roaming device. Although many areas have existing APRS "digipeaters" set up by local hams, they are all voluntary, as the bands can't be used for commercial purposes. As a licensed operator, you could of course set up your own repeaters.
Many types of communications are prohibited on the Amateur Radio bands, but this leaves plenty of room for hobbyist and personal research efforts, and as an Extra class ham I would welcome your project!
The search term you want is "APRS" - the Automatic Packet Reporting System. Many people and companies already have GPS-to-radio interfaces that work with the ham APRS system, so you can track vehicles and other objects (such as balloons) through this ham radio network.
The GPS will probably have a serial connection printing out the position as a standard NMEA string.
There are a few protocols for sending RS-232 ASCII over ham radio - start here
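If you end up handling those NMEA sentences yourself, the built-in checksum is easy to verify: it is the XOR of every character between the '$' and the '*', compared against the two hex digits that follow. A sketch (the sample sentence is only illustrative):
// Sketch: verify the checksum of an NMEA 0183 sentence, e.g.
//   $GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47
// The checksum is the XOR of every character between '$' and '*'.
#include <stdlib.h>

int nmea_checksum_ok(const char *sentence)
{
    if (sentence[0] != '$')
        return 0;
    unsigned char sum = 0;
    const char *p = sentence + 1;
    while (*p && *p != '*')
        sum ^= (unsigned char)*p++;
    if (*p != '*')
        return 0;                                     // no checksum field at all
    return sum == (unsigned char)strtoul(p + 1, NULL, 16);
}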
Amateur radio is not really free - there are obviously no carrier fees, but you need a license to transmit and approved equipment. It's not a free-for-all.
Looking at the data-link level standards, such as PPP general frame format or Ethernet, it's not clear what happens if the checksum is invalid. How does the protocol know where the next frame begins?
Does it just scan for the next occurrence of "flag" (in the case of PPP)? If so, what happens if the packet payload just so happens to contain "flag" itself? My point is that, whether packet-framing or "length" fields are used, it's not clear how to recover from invalid packets where the "length" field could be corrupt or the "framing" bytes could just so happen to be part of the packet payload.
UPDATE: I found what I was looking for (which isn't strictly what I asked about) by looking up "GFP CRC-based framing". According to Communication Networks:
The GFP receiver synchronizes to the GFP frame boundary through a three-state process. The receiver is initially in the hunt state where it examines four bytes at a time to see if the CRC computed over the first two bytes equals the contents of the next two bytes. If no match is found the GFP moves forward by one byte as GFP assumes octet synchronous transmission given by the physical layer. When the receiver finds a match it moves to the pre-sync state. While in this intermediate state the receiver uses the tentative PLI (payload length indicator) field to determine the location of the next frame boundary. If a target number N of successful frame detection has been achieved, then the receiver moves into the sync state. The sync state is the normal state where the receiver examines each PLI, validates it using cHEC (core header error checking), extracts the payload, and proceeds to the next frame.
In short, each packet begins with "length" and "CRC(length)". There is no need to escape any characters and the packet length is known ahead of time.
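A rough sketch of the hunt state described above: slide forward one byte at a time until the CRC-16 computed over a candidate 2-byte length field matches the 2 bytes that follow it. The CRC parameters are my reading of G.7041 (generator x^16 + x^12 + x^5 + 1, zero initial value), so treat them as an assumption rather than a reference implementation.
// Sketch of GFP-style CRC-based frame delineation (the "hunt" state).
#include <stddef.h>
#include <stdint.h>

static uint16_t crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;                              // assumed zero initial value
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)                // polynomial x^16 + x^12 + x^5 + 1
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021) : (uint16_t)(crc << 1);
    }
    return crc;
}

// Returns the offset of the first plausible frame boundary, or -1 if none.
long gfp_hunt(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 4 <= len; i++) {
        uint16_t chec = (uint16_t)((buf[i + 2] << 8) | buf[i + 3]);
        if (crc16(&buf[i], 2) == chec)
            return (long)i;                        // candidate; pre-sync/sync states confirm it
    }
    return -1;
}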
There seems to be two major approaches to packet framing:
encoding schemes (bit/byte stuffing, Manchester encoding, 4b5b, 8b10b, etc)
unmodified data + checksum (GFP)
The former is safer, the latter is more efficient. Both are prone to errors if the payload just happens to contain a valid packet and line corruption causes the surrounding bytes to look like the "start of frame" byte sequence, but that sounds highly improbable. It's difficult to find hard numbers on GFP's robustness, but a lot of modern protocols seem to use it, so one can assume that they know what they're doing.
Both PPP and Ethernet have mechanisms for framing - that is, for breaking a stream of bits up into frames, in such a way that if a receiver loses track of what's what, it can pick up at the start of the next frame. These sit right at the bottom of the protocol stack; all the other details of the protocol are built on the idea of frames. In particular, the preamble, LCP, and FCS are at a higher level, and are not used to control framing.
PPP, over serial links like dialup, is framed using HDLC-like framing. A byte value of 0x7e, called a flag sequence, indicates the start of the frame. The frame continues until the next flag byte. Any occurrence of the flag byte in the content of the frame is escaped. Escaping is done by writing 0x7d, known as the control escape byte, followed by the byte to be escaped xor'd with 0x20. The flag sequence is escaped to 0x5e; the control escape itself also has to be escaped, to 0x5d. Other values can also be escaped if their presence would upset the modem. As a result, if a receiver loses synchronisation, it can just read and discard bytes until it sees a 0x7e, at which point it knows it's at the start of a frame again. The contents of the frame are then structured, containing some odd little fields that aren't really important, but are retained from an earlier IBM protocol, along with the PPP packet (called a protocol data unit, PDU), and also the frame check sequence (FCS).
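A short sketch of that escaping rule (0x7e flag, 0x7d control escape, XOR with 0x20); escaping of the additional control characters negotiated via the ACCM is left out:
// Sketch: HDLC-like byte stuffing as used by PPP over serial links.
#include <stddef.h>
#include <stdint.h>

// Writes the stuffed frame into `out` (worst case 2 * inlen + 2 bytes)
// and returns its length.
size_t ppp_stuff(const uint8_t *in, size_t inlen, uint8_t *out)
{
    size_t n = 0;
    out[n++] = 0x7e;                    // opening flag sequence
    for (size_t i = 0; i < inlen; i++) {
        uint8_t c = in[i];
        if (c == 0x7e || c == 0x7d) {   // flag and control-escape must be escaped
            out[n++] = 0x7d;            // control escape
            out[n++] = c ^ 0x20;        // 0x7e -> 0x5e, 0x7d -> 0x5d
        } else {
            out[n++] = c;
        }
    }
    out[n++] = 0x7e;                    // closing flag sequence
    return n;
}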
Ethernet uses a logically similar approach, of having symbols which are recognisable as frame start and end markers rather than data, but rather than having reserved bytes plus an escape mechanism, it uses a coding scheme which is able to express special control symbols that are distinct from data bytes - a bit like using punctuation to break up a sequence of letters. The details of the system used vary with the speed.
Standard (10 Mb/s) Ethernet is encoded using a scheme called Manchester encoding, in which each bit to be transmitted is represented as two successive levels on the line, in such a way that there is always a transition between levels within every bit, which helps the receiver stay synchronised. Frame boundaries are indicated by violating the encoding rule, leading to a bit with no transition (I read this in a book years ago but can't find a citation online, so I might be wrong about this). In effect, this system expands the binary code to three symbols - 0, 1, and violation.
Fast (100 Mb/s) Ethernet uses a different coding scheme, based on a 4b/5b code, where groups of four data bits (nybbles) are represented as groups of five bits on the wire and transmitted directly, without the Manchester scheme. The expansion to five bits lets the sixteen necessary patterns be chosen so as to fulfil the requirement for frequent level transitions, again to help the receiver stay synchronised. However, there's still room to choose some extra symbols, which can be transmitted but don't correspond to data values, in effect expanding the set of nybbles to twenty-four symbols - the nybbles 0 to F, and symbols called Q, I, J, K, T, R, S and H. Ethernet uses a JK pair to mark frame starts, and TR to mark frame ends.
Gigabit Ethernet is similar to Fast Ethernet, but with a different coding scheme - the optical fibre versions use an 8b/10b code instead of the 4b/5b code, and the twisted-pair version uses a very complex quinary code arrangement which I don't really understand. Both approaches yield the same result: the ability to transmit either data bytes or one of a small set of additional special symbols, and those special symbols are used for framing.
On top of this basic framing structure, there is then a fixed preamble, followed by a frame delimiter, and some control fields of varying pointlessness (hello, LLC/SNAP!). Validity of these fields can be used to validate the frame, but they can't be used to define frames on their own.
You're pretty close to the correct answer already. Basically, if it starts with a preamble and ends in something that matches as a checksum, it's treated as a frame and passed up to higher layers.
PPP and Ethernet both look for the next frame start signal. In the case of Ethernet, it's the preamble: a sequence of alternating ones and zeros followed by the start frame delimiter, 64 bits in total. If an Ethernet decoder sees that, it simply assumes what follows is a frame. By capturing the bits and then checking whether the checksum matches, it decides whether it has a valid frame.
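As a rough sketch of that "capture the bits, then check the checksum" step: the Ethernet FCS is a CRC-32 over everything from the destination address up to the FCS itself. The bit-reflected CRC-32 below and the assumption that the FCS arrives least-significant byte first are my reading of the standard, so treat the details as illustrative.
// Sketch: validate a captured Ethernet frame by recomputing the CRC-32 FCS.
#include <stddef.h>
#include <stdint.h>

static uint32_t crc32_reflected(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

// `frame` covers destination address through FCS; `len` includes the 4-byte FCS.
int ethernet_fcs_ok(const uint8_t *frame, size_t len)
{
    if (len < 4)
        return 0;
    uint32_t calc = crc32_reflected(frame, len - 4);
    uint32_t fcs  = (uint32_t)frame[len - 4]           // assumed LSByte-first on the wire
                  | ((uint32_t)frame[len - 3] << 8)
                  | ((uint32_t)frame[len - 2] << 16)
                  | ((uint32_t)frame[len - 1] << 24);
    return calc == fcs;
}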
As for the payload containing the FLAG, in PPP it is escaped with additional bytes to prevent such misinterpretation.
As far as I know, PPP only supports error detection, and does not support any form of error correction or recovery.
Backed up by Cisco here: http://www.cisco.com/en/US/docs/internetworking/technology/handbook/PPP.html
This Wikipedia PPP line activation section describes the basics of RFC 1661.
A Frame Check Sequence is used to detect transmission errors in a frame (described in the earlier Encapsulation section).
The diagram from RFC 1661 on this Wikipedia page describes how the Network protocol phase can restart with Link Establishment on an error.
Also, some notes from the Cisco page referred to by Suvesh.
PPP Link-Control Protocol
The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection. LCP goes through four distinct phases.
First, link establishment and configuration negotiation occur. Before any network layer datagrams (for example, IP) can be exchanged, LCP first must open the connection and negotiate configuration parameters. This phase is complete when a configuration-acknowledgment frame has been both sent and received.
This is followed by link quality determination. LCP allows an optional link quality determination phase following the link-establishment and configuration-negotiation phase. In this phase, the link is tested to determine whether the link quality is sufficient to bring up network layer protocols. This phase is optional. LCP can delay transmission of network layer protocol information until this phase is complete.
At this point, network layer protocol configuration negotiation occurs. After LCP has finished the link quality determination phase, network layer protocols can be configured separately by the appropriate NCP and can be brought up and taken down at any time. If LCP closes the link, it informs the network layer protocols so that they can take appropriate action.
Finally, link termination occurs. LCP can terminate the link at any time. This usually is done at the request of a user but can happen because of a physical event, such as the loss of carrier or the expiration of an idle-period timer.
Three classes of LCP frames exist. Link-establishment frames are used to establish and configure a link. Link-termination frames are used to terminate a link, and link-maintenance frames are used to manage and debug a link.
These frames are used to accomplish the work of each of the LCP phases.