What is the Si7021 sensor I2C command sequence? (embedded)

From the manual linked below, the address is 0x40 and the measure-temperature command is 0xE3.
From the read-back diagram, the master first sends the slave address and the measure command, then sends the slave address again.
But we can't see in the diagram where the measured data is. How is it transferred back?
The manual doesn't say what MS Byte and LS Byte are.
Thanks.
https://www.silabs.com/documents/public/data-sheets/Si7021-A20.pdf

The measured data is a two-byte value contained in the MS Byte and the LS Byte. MS Byte is the most significant byte; LS Byte is the least significant byte.
In the diagram, the unshaded cells represent data transmitted from the master to the slave, and the shaded cells represent data received by the master from the slave.

Related

Is Cyclic Redundancy Check (CRC) able to detect the wrong sequence of data?

We are using a CRC to detect errors in a set of data transferred over a bus. The byte-wise CRC of the entire data set is calculated at the source and verified at the destination. It may happen that the order of the data changes during transfer. Is a CRC able to detect a wrong data sequence?
I personally think that a CRC is not able to detect this because it is an XOR-based operation, but I cannot find a reference in the literature.
Yes (almost always). A CRC is not like a simple checksum, where the operations on the bytes are commutative. A CRC is based on exclusive-ors *and shifts*, not just exclusive-ors. Any swap of two adjacent bytes will always be detected by any CRC of 16 bits or more.

What costs more data, ASCII or HEX?

I'm dealing with a device that offers both options for sending data over a UDP connection. As I couldn't find any comparison, could someone explain the difference between the two?
Hex data transfers each byte as two hex characters, so each transmitted character carries only 4 bits of the available 8. Raw data transfers 8 bits at a time, using the full range 0..255, while a single hex character only encodes 0..15.
For example, the byte value 18 (0x12) is transferred hex-coded as the two characters '1' and '2' (taking up two bytes), but raw-encoded as the single byte 00010010.

Is the redis statistic value instantaneous_output_kbps in bytes or bits?

When using the redis-cli INFO command you get output for
instantaneous_output_kbps and instantaneous_input_kbps. Are those statistics measured in bytes or bits?
It's measured in bytes, even though this is not documented on the redis website.
This is how redis tracks those internally (see server.c, line 954):
trackInstantaneousMetric(STATS_METRIC_NET_INPUT,
                         server.stat_net_input_bytes);
trackInstantaneousMetric(STATS_METRIC_NET_OUTPUT,
                         server.stat_net_output_bytes);
This is tracked in bytes, and trackInstantaneousMetric doesn't manipulate the data in any way. It's basically a moving average of the network I/O, measured in bytes.

Is using implied data in a message to compute a CRC a good design strategy?

We are sending UDP messages from one device to another. There is a timestamp in the message, transmitted in a 16-bit field. The receiver keeps track of the number of times the field "rolls over", so that time spans requiring more than 16 bits can be tracked. The protocol designer has decided that we should use the entire 32-bit timestamp to compute the CRC for the message. Is this a good idea? Note that the message period is much smaller than the roll-over period.
Since you are apparently in control of the messages, you should just transmit the 32-bit timestamp in the message.
What is the size of the CRC? If it is a 16-bit CRC, you could forgo the error-detection function completely and solve the equations to recover the missing 16 bits of the timestamp from the transmitted message and the CRC. But if you're going to do that, why not just send the other 16 bits of the timestamp directly in the CRC field, instead of a CRC?
If it is a 32-bit CRC, you could again solve, and be left with 16 bits of "strength" in the error detection instead of 32. Again, one would have to ask why you wouldn't just send the other 16 bits of the timestamp and put a 16-bit CRC in what remains.
Or, if you can change the length of the message, just add the missing 16 bits of the timestamp, leaving the CRC and its error-detection capability intact.

What are multiplexed and non-multiplexed address pins?

In the question below:

A DRAM has 11 multiplexed address pins and one data input/output pin; another has 14 non-multiplexed address pins and 4 data input/output pins. Determine the organization of each DRAM.

What are multiplexed address pins and non-multiplexed address pins?
Multiplexed address pins in the context of DRAM mean that you address a row and a column on the same set of pins. First you place the row address on the pins and assert RAS (row address strobe) to tell the DRAM to latch the row address. Then you place the column address on the pins and assert CAS (column address strobe) to tell the DRAM to latch the column address. At this point, the DRAM will read or write the data for that row:column on the in/out pin, depending on what you've told it to do with your R/W select pin.
Non-multiplexed pins mean that the entire address, row and column together, is presented at once. You write the address, and the DRAM reads or writes a data word at that address.
From this info you can probably figure out the total address space. And your data width is given, right?
Here's a paper that explains multiplexed DRAM in more detail. And if you're still having trouble, you may be able to find more info in What Every Programmer Should Know About Memory, Chapter 2.