UART on Atmel Studio SAML21 Xplained with example project? - uart

I want to send the value returned by the function at30tse_read_temperature in the Atmel Studio example project called "AT30TSE75X_TEMP_SENSO".
I added the USART with some help and the ASF wizard, and it works when I call printf("something") (I see the output on my PC with Tera Term).
Now I want to send the value returned by a function.
Here is my code:
temp_res = at30tse_read_temperature();
double *temp_result = &temp_res;
//printf("%f", at30tse_read_temperature());
u_char *c_Data = ( u_char* ) &temp_result;
//usart_serial_write_packet(&cdc_uart_module,c_Data,sizeof(temp_result));
usart_serial_write_packet(&cdc_uart_module,c_Data,sizeof(c_Data));
I'm totally lost... I don't know how to send the value of the function I called three lines above.
I'm pretty sure it's easy, and I've made some big mistakes... please help!
It's the example project on atmel studio, with uart added.
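Since printf() already works over the CDC UART, the simplest route is to send the temperature as formatted text rather than raw bytes. A minimal sketch follows; the two-decimal format, buffer size, and helper name are my own choices, not from the ASF example:

```c
#include <stdio.h>

/* Format a temperature reading as text; returns the number of
   characters written (excluding the terminator). */
static int format_temperature(char *buf, size_t len, double temp)
{
    return snprintf(buf, len, "%.2f\r\n", temp);
}

/*
 * On the SAML21 you would then send the text, e.g.:
 *
 *   char buf[16];
 *   int n = format_temperature(buf, sizeof(buf), at30tse_read_temperature());
 *   usart_serial_write_packet(&cdc_uart_module, (const uint8_t *)buf, (size_t)n);
 *
 * Note that sizeof(c_Data) in the original code is the size of a
 * pointer, not of the data it points to.
 */
```

Alternatively, since printf() already reaches Tera Term, printf("%f", temp_res) should print the value directly, provided the C library was built with floating-point support.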

Related

Printf / show values on MPLAB X IDE - dsPIC33F

I'm a real PIC beginner and I'm trying to use a development board with a dsPIC33F, using MPLAB X IDE and CANalyzer. I send CAN frames with the dsPIC, but the data read by CANalyzer is not what I tried to send. Is there any way I can check my variable values or my frames in MPLAB X IDE?
I tried to use printf following instructions on forums, but it seems I can't open a UART window :/
I send a frame with DLC = 0x001 and data = 0x002; the ID and DLC read by CANalyzer are correct, but it says the data received is "00".
RESOLVED (Frames init problem)

Reading TI Temperature sensor with I2C

I'm brand new to I2C and have an idea of what's required to read from a slave device as a master, but I am struggling to turn my pseudocode into working code.
I want to read the local temperature sensor on the TI TMP468 using the TMS570LS1227PGE's I2C pins. I've enabled the I2C driver, the pinmux, and all of the I2C interrupts in HalCoGen. For reference, all non-standard functions are defined in TI's i2c.c file.
In CCS, I've typed the following code inside a FreeRTOS task (I called i2cInit() in main):
i2cSetMode(i2cREG1, I2C_MASTER);
/* Set direction to receiver */
i2cSetDirection(i2cREG1, I2C_TRANSMITTER);
i2cSetStart(i2cREG1); //start bit
i2cSetSlaveAdd(i2cREG1, 0b1001000); //address of temp sensor
while(i2cIsTxReady(i2cREG1)){}; //wait until flag is clear
i2cSendByte(i2cREG1, 0b00000000); //register to read from
i2cSetStart(i2cREG1);
i2cSendByte(i2cREG1, 0b10010001); //read from sensor
i2cSetDirection(i2cREG1, I2C_RECEIVER);
while(i2cIsRxReady(i2cREG1)){};
data = i2cReceiveByte(i2cREG1); //read data
i2cSetStop(i2cREG1); //stop bit
I'm not seeing any activity in my data variable, and when I debug, my code is always stuck inside void i2cInterrupt() at:
uint32 vec = (i2cREG1->IVR & 0x00000007U);
I'm not really sure where to go from here, any ideas or obvious mistakes?
Thanks!
I would recommend first trying to read from a register with non-changing contents: in the case of the TMP468, that would be address 0xFE, which contains the manufacturer ID 0x5449. That will allow you to "wring out" your I2C code.
(By the way, you may also want to ask this question in the http://e2e.ti.com/support/microcontrollers/hercules/ forum.)
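To illustrate that suggestion, here is a host-side sketch of the read transaction against a faked sensor. The i2c_send_byte()/i2c_receive_byte() helpers below are simplified stand-ins for the HalCoGen driver calls (i2cSendByte()/i2cReceiveByte()), and the fake register logic exists only so the sequence can run off-target:

```c
#include <stdint.h>

#define TMP468_REG_MFG_ID 0xFEu   /* should always read back 0x5449 ("TI") */

/* --- fake sensor state, standing in for real bus traffic --- */
static uint8_t fake_reg_ptr;      /* register pointer inside the fake TMP468 */
static int     fake_byte_idx;     /* which half of the 16-bit value comes next */

static void i2c_send_byte(uint8_t b)      /* stand-in for i2cSendByte() */
{
    fake_reg_ptr  = b;
    fake_byte_idx = 0;
}

static uint8_t i2c_receive_byte(void)     /* stand-in for i2cReceiveByte() */
{
    if (fake_reg_ptr != TMP468_REG_MFG_ID)
        return 0u;
    return (fake_byte_idx++ == 0) ? 0x54u : 0x49u;  /* high byte first */
}

/* The sequence the answer recommends: write the register pointer,
   then read the 16-bit register, MSB first. */
static uint16_t tmp468_read_mfg_id(void)
{
    i2c_send_byte(TMP468_REG_MFG_ID);
    uint16_t hi = i2c_receive_byte();
    uint16_t lo = i2c_receive_byte();
    return (uint16_t)((hi << 8) | lo);
}
```

If the same sequence on the real bus does not return 0x5449, the problem is in the bus setup (addressing, start/stop, direction) rather than in the temperature conversion.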

How to perform floating point calculation in Tmote Sky (Contiki)

I have the following code snippet:
#include "contiki.h"
#include <stdio.h> /* For printf() */
PROCESS(calc_process, "calc process");
AUTOSTART_PROCESSES(&calc_process);
PROCESS_THREAD(calc_process, ev, data)
{
double dec=13.2, res=0, div=3.2;
PROCESS_BEGIN();
res=dec+div;
printf("%f",res);
PROCESS_END();
}
After uploading the above code to the Tmote Sky platform with the command
make TARGET=sky calc.upload, the program is loaded onto the mote (there is no error). Then, after logging in to the mote using make login TARGET=sky, the following output is displayed:
OUTPUT:
Rime started with address 4.0
MAC 04:00:00:00:00:00:00:00 Contiki 2.7 started. Node id is set to 4.
CSMA ContikiMAC, channel check rate 8 Hz, radio channel 26
Starting 'calc process'
%f
How can I get the correct value?
Thanks
It is not floating-point calculation support that you need - you have that already. What is missing is floating-point support within printf(). That is to say, res will be calculated correctly, but printf() does not support displaying it.
Because it requires a relatively large amount of code, many microcontroller targeted libraries omit floating point support in stdio. There may be a library build option to include floating point support - refer to the library documentation.
You might do well to ask a question about the specific calculation necessary, and how it might be done using integer or fixed point arithmetic. Alternatively you might write your own floating point display as described here: How to print floating point value using putchar? for example.
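As a sketch of the integer/fixed-point route, a two-decimal-place value can be formatted using only %d conversions, which Contiki's minimal printf() does support. The helper name and the rounding scheme are my own:

```c
#include <stdio.h>

/* Format a double with two decimal places using only integer
   conversions. Values in (-1, 0) would lose their minus sign;
   good enough for a sketch. */
static int format_fixed2(char *buf, size_t len, double v)
{
    int whole = (int)v;
    int frac  = (int)((v - whole) * 100.0 + (v >= 0 ? 0.5 : -0.5));
    if (frac < 0)
        frac = -frac;
    return snprintf(buf, len, "%d.%02d", whole, frac);
}
```

On the mote you could equally print the two integers directly, e.g. printf("%d.%02d\n", whole, frac).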

Confused by the report ID when using HIDAPI through USB

I am a USB HID newbie and I am trying to use the HIDAPI for my application.
I have a question about using HIDAPI (in Visual Studio) regarding the report ID.
When I try to use the HIDAPI and connect to the Microchip Custom Demo,
I am confused about one aspect: the 65-byte report does not make sense to me!
Even if I don't want to set a report ID, I have to set the first byte to 0 and send a 65-byte buffer to the device, yet I only receive 64 bytes of data from the Microchip device (because the report is 64 bytes long).
It looks like:
Host                        Device
write_hid
   65 bytes --------------->
read_hid
   <--------------- 64 bytes
However, it seems weird to me.
Isn't the report that is sent or received always 64 bytes? The specification says the report has a 64-byte maximum and is sent every 1 ms.
If so, why does the API use 65 bytes for a 1-byte report ID?
Is the report ID contained in the 64 bytes?
The 65-byte data length does not make sense to me.
If your application does not include a Report ID in the HID descriptor, then there shouldn't be a Report ID prepended.
As you can see in the documentation of hid_write, HIDAPI should only send 64 bytes if the first byte is 0 (i.e. there is no Report ID):
unsigned char buf[65];
buf[0] = 0; /* single report, no report ID */
/* fill the report data starting at buf[1] */
hid_write(device, buf, sizeof(buf));
Looking at the source code of the libusb implementation, you can see that the Report ID is correctly stripped. On Windows, however, the data is passed straight through to Windows. I don't know Windows programming, but perhaps this makes a difference. Try testing this on Linux instead.

How do I get the Keil RealView MDK-ARM toolchain to link a region for execution in one area in memory but have it store it in another?

I'm writing a program that updates flash memory. While I'm erasing/writing flash I would like to be executing from RAM. Ideally I'd link my code to an execution region that's stored in flash that on startup I would copy to the RAM location to which it is linked.
I don't include any of the normal generated C/C++ initialization code so I can't just tag my function as __ram.
If I could do the above, then the debugger's symbols would be valid for the code copied to RAM, and I'd be able to debug business as usual.
I'm thinking that something along the lines of OVERLAY/RELOC might help but I'm not sure.
Thanks,
Maybe your application code can do the copy manually. Something like:
/* symbols exported from the linker definition file;
   FunctionSize is the function's size in bytes */
extern uint32_t FunctionInFlash;
extern uint32_t RamReservedForFunction;

uint32_t *pSourceAddr = &FunctionInFlash;
uint32_t *pDestAddr   = &RamReservedForFunction;
while (pSourceAddr < &FunctionInFlash + FunctionSize / sizeof(uint32_t))
{
    *pDestAddr++ = *pSourceAddr++;
}

typedef int (*RamFuncPtr)(int arg1); /* or whatever the signature is */
result = ((RamFuncPtr)&RamReservedForFunction)(argument1);
You should be able to get the linker definition file to export symbols for the FunctionInFlash and RamReservedForFunction addresses.
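Alternatively, the linker can do most of this for you: Keil's scatter-loading syntax lets an execution region have a load address in flash and a different execution address in RAM. A sketch, with made-up region names, addresses, and object file name:

```
LR_FLASH 0x00000000 0x00040000 {   ; load region: everything stored in flash
    ER_FLASH 0x00000000 {          ; normal code executes from flash
        * (+RO)
    }
    ER_RAMCODE 0x20000000 {        ; flash_ops.o is stored in flash but
        flash_ops.o (+RO)          ; linked to execute at 0x20000000
    }
    ER_RW 0x20004000 {
        * (+RW +ZI)
    }
}
```

armlink then generates Load$$ER_RAMCODE$$Base, Image$$ER_RAMCODE$$Base and Image$$ER_RAMCODE$$Length symbols, so the copy loop can use those instead of hand-maintained addresses, and the debugger's symbols match the RAM addresses.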