Using f_mount to read and write data to a text file - USB

In my application I need to open, read and write data to a text file using the calls f_open, f_read, and f_write.
It fails to open the .txt file:
res = f_open(&f_header.file, file_path, FA_OPEN_EXISTING | FA_WRITE | FA__WRITTEN | FA_READ | FA_CREATE_NEW);
printf("res value after f open %d \n\r", res);
if (res != FR_OK) {
    printf("Failed to open %s, error %d\n\r", file_path, res);
}
This gives the following error:
FR_NOT_ENABLED, /* (12) The volume has no work area */
To resolve this error, the application program needs to perform an f_mount after each media change, so that a work area is registered and the filesystem object is force-cleared.
How should I use the f_mount() call in this application to solve this issue?
I'm not clear about the 2nd parameter.
To fix it, I added f_mount(&fs0, "0://", 1); before the f_open call, but the f_mount() call is not working either.
res = f_mount(&fs0, "0://", 1);
res = f_open(&f_header.file, file_path, FA_OPEN_EXISTING | FA_WRITE | FA__WRITTEN | FA_READ | FA_CREATE_NEW);
The code stops at run time before it even reaches the f_mount() call.
Here is the source code for f_mount which I'm using:
FRESULT f_mount (
    FATFS* fs,          /* Pointer to the file system object (NULL:unmount) */
    const TCHAR* path,  /* Logical drive number to be mounted/unmounted */
    BYTE opt            /* 0:Do not mount (delayed mount), 1:Mount immediately */
)
{
    FATFS *cfs;
    int vol;
    FRESULT res;
    const TCHAR *rp = path;

    vol = get_ldnumber(&rp);
    if (vol < 0) return FR_INVALID_DRIVE;
    cfs = FatFs[vol];                       /* Pointer to fs object */

    if (cfs) {
#if _FS_LOCK
        clear_lock(cfs);
#endif
#if _FS_REENTRANT                           /* Discard sync object of the current volume */
        if (!ff_del_syncobj(cfs->sobj)) return FR_INT_ERR;
#endif
        cfs->fs_type = 0;                   /* Clear old fs object */
    }

    if (fs) {
        fs->fs_type = 0;                    /* Clear new fs object */
#if _FS_REENTRANT                           /* Create sync object for the new volume */
        if (!ff_cre_syncobj((BYTE)vol, &fs->sobj)) return FR_INT_ERR;
#endif
    }
    FatFs[vol] = fs;                        /* Register new fs object */

    if (!fs || opt != 1) return FR_OK;      /* Do not mount now, it will be mounted later */

    res = find_volume(&fs, &path, 0);       /* Force mounted the volume */
    LEAVE_FF(fs, res);
}
The code builds without any errors or warnings.
I'm sure there is no problem with the code itself.
Is this a problem related to memory allocation, or to running out of memory on the eMMC? What are the possible reasons for this behaviour?

According to http://elm-chan.org/fsw/ff/doc/mount.html:
FRESULT f_mount (
FATFS* fs, /* [IN] Filesystem object */
const TCHAR* path, /* [IN] Logical drive number */
BYTE opt /* [IN] Initialization option */
);
Parameters
fs
Pointer to the filesystem object to be registered and cleared. Null pointer unregisters the registered filesystem object.
path
Pointer to the null-terminated string that specifies the logical drive. The string without drive number means the default drive.
opt
Mounting option. 0: Do not mount now (to be mounted on the first access to the volume), 1: Force mounted the volume to check if it is ready to work.
In other words, the second parameter is how you want to refer to this particular filesystem when later working with it.
For example, mounting it like so:
f_mount(&fs0, "0://", 1);
you would then be able to open files like this:
f_open(fp, "0://path/to/file", FA_CREATE_ALWAYS);

Related

STM32L496 - FLASH Page Erase No Effect

I have a project originally built for the STM32L432 that I am porting to the STM32L496. However, I am having an issue where the HAL_FLASHEx_Erase function provided by ST as part of the HAL works on one device but not the other. The two FLASH spaces of these micros are organized virtually identically (same 72-bit wide data reads/writes). The only difference is that the L496 has a second bank of FLASH. I have seen some users run into issues with this; I am NOT attempting to use Bank 2 or come anywhere close to it; my last address is at 0x801FFFF.
I can manually erase the FLASH using the STCube Programmer, which fills it with FFs. This then satisfies the requirement for the FLASH to be "erased" before writing, and I can write one block of data. But I cannot modify it again via code. As soon as I write the first time, I cannot clear the block of data that I'd just written (verified using the Memory window view in IAR).
Again, the exact same piece of code works on one L-series part but not the other. Anyone have any ideas?
HAL_StatusTypeDef HAL_FLASHEx_Erase(FLASH_EraseInitTypeDef *pEraseInit, uint32_t *PageError)
{
  HAL_StatusTypeDef status = HAL_ERROR;
  uint32_t page_index = 0;

  /* Process Locked */
  __HAL_LOCK(&pFlash);

  /* Check the parameters */
  assert_param(IS_FLASH_TYPEERASE(pEraseInit->TypeErase));

  /* Wait for last operation to be completed */
  status = FLASH_WaitForLastOperation((uint32_t)FLASH_TIMEOUT_VALUE);

  if (status == HAL_OK)
  {
    pFlash.ErrorCode = HAL_FLASH_ERROR_NONE;

    if (pEraseInit->TypeErase == FLASH_TYPEERASE_MASSERASE)
    {
      /* Mass erase to be done */
      FLASH_MassErase(pEraseInit->Banks);

      /* Wait for last operation to be completed */
      status = FLASH_WaitForLastOperation((uint32_t)FLASH_TIMEOUT_VALUE);

#if defined(STM32L471xx) || defined(STM32L475xx) || defined(STM32L476xx) || defined(STM32L485xx) || defined(STM32L486xx)
      /* If the erase operation is completed, disable the MER1 and MER2 Bits */
      CLEAR_BIT(FLASH->CR, (FLASH_CR_MER1 | FLASH_CR_MER2));
#else
      /* If the erase operation is completed, disable the MER1 Bit */
      CLEAR_BIT(FLASH->CR, (FLASH_CR_MER1));
#endif
    }
    else
    {
      /* Initialization of PageError variable */
      *PageError = 0xFFFFFFFF;

      for (page_index = pEraseInit->Page; page_index < (pEraseInit->Page + pEraseInit->NbPages); page_index++)
      {
        FLASH_PageErase(page_index, pEraseInit->Banks);

        /* Wait for last operation to be completed */
        status = FLASH_WaitForLastOperation((uint32_t)FLASH_TIMEOUT_VALUE);

        /* If the erase operation is completed, disable the PER Bit */
        CLEAR_BIT(FLASH->CR, (FLASH_CR_PER | FLASH_CR_PNB));

        if (status != HAL_OK)
        {
          /* In case of error, stop erase procedure and return the faulty address */
          *PageError = page_index;
          break;
        }
      }
    }

    /* Flush the caches to be sure of the data consistency */
    FLASH_FlushCaches();
  }

  /* Process Unlocked */
  __HAL_UNLOCK(&pFlash);

  return status;
}
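For reference, the usual caller-side sequence around HAL_FLASHEx_Erase on the L4 parts is roughly the following (a sketch only, not the poster's code; the page index, bank and target address are placeholders and must be consistent with the device's page size and bank layout):
FLASH_EraseInitTypeDef erase = {0};
uint32_t page_error = 0;
uint64_t data = 0x1122334455667788ULL;      /* placeholder double word */

HAL_FLASH_Unlock();                         /* FLASH->CR stays write-protected until unlocked */

erase.TypeErase = FLASH_TYPEERASE_PAGES;
erase.Banks     = FLASH_BANK_1;             /* on dual-bank parts the bank must match the page */
erase.Page      = 63;                       /* placeholder: page 63 of bank 1 (2 KB pages) */
erase.NbPages   = 1;

if (HAL_FLASHEx_Erase(&erase, &page_error) == HAL_OK)
{
    /* L4 flash is programmed 64 bits (one double word) at a time */
    HAL_FLASH_Program(FLASH_TYPEPROGRAM_DOUBLEWORD, 0x0801F800 /* placeholder address */, data);
}

HAL_FLASH_Lock();
If the same sequence works on the L432 but not the L496, checking HAL_FLASH_GetError() (or the FLASH->SR flags) right after the failing erase usually narrows it down to a write-protection, alignment or bank-selection issue.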

How to put BG96 on power save mode between sending messages to Azure IoT Hub over HTTP

I'm using a Nucleo L496ZG, X-NUCLEO-IKS01A2 and the Quectel BG96 module to send sensor data (temperature, humidity etc..) to Azure IoT Central over HTTP.
I've been using the example implementation provided by Avnet here, which works fine, but it's not power optimized, and with a 6700 mAh battery pack it only lasts around 30 hours sending telemetry every ~10 seconds. The goal is for it to last around a week. I'm open to increasing the time between messages, but I also want to save power in between sending.
I've gone over the Quectel BG96 manuals and I've tried two things:
1) Powering off the module by driving the PWRKEY and turning it back on when I need to send a message.
I've gotten this to work, kind of… until I get a hard fault exception, which happens seemingly randomly anywhere from ~5 minutes of running to 2 hours (messages send successfully prior to the exception). The output of the crash log parser is the same every time:
Crash location = strncmp [0x08038DF8] (based on PC value)
Caller location = _findenv_r [0x0804119D] (based on LR value)
Stack Pointer at the time of crash = [20008128]
Target and Fault Info:
Processor Arch: ARM-V7M or above
Processor Variant: C24
Forced exception, a fault with configurable priority has been escalated to HardFault
A precise data access error has occurred. Faulting address: 03060B30
The caller location traces back to my .map file and I don't know what to make of it.
My code:
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
//#define USE_MQTT
#include <stdlib.h>
#include "mbed.h"
#include "iothubtransporthttp.h"
#include "iothub_client_core_common.h"
#include "iothub_client_ll.h"
#include "azure_c_shared_utility/platform.h"
#include "azure_c_shared_utility/agenttime.h"
#include "jsondecoder.h"
#include "bg96gps.hpp"
#include "azure_message_helper.h"
#define IOT_AGENT_OK CODEFIRST_OK
#include "azure_certs.h"
/* initialize the expansion board && sensors */
#include "XNucleoIKS01A2.h"
static HTS221Sensor *hum_temp;
static LSM6DSLSensor *acc_gyro;
static LPS22HBSensor *pressure;
static const char* connectionString = "xxx";
// to report F uncomment this #define CTOF(x) (((double)(x)*9/5)+32)
#define CTOF(x) (x)
Thread azure_client_thread(osPriorityNormal, 10*1024, NULL, "azure_client_thread");
static void azure_task(void);
EventFlags deleteOK;
size_t g_message_count_send_confirmations;
/* create the GPS elements for example program */
BG96Interface* bg96Interface;
//static int tilt_event;
// void mems_int1(void)
// {
// tilt_event++;
// }
void mems_init(void)
{
//acc_gyro->attach_int1_irq(&mems_int1); // Attach callback to LSM6DSL INT1
hum_temp->enable(); // Enable HTS221 environmental sensor
pressure->enable(); // Enable barometric pressure sensor
acc_gyro->enable_x(); // Enable LSM6DSL accelerometer
//acc_gyro->enable_tilt_detection(); // Enable Tilt Detection
}
void powerUp(void) {
if (platform_init() != 0) {
printf("Error initializing the platform\r\n");
return;
}
bg96Interface = (BG96Interface*) easy_get_netif(true);
}
void BG96_Modem_PowerOFF(void)
{
DigitalOut BG96_RESET(D7);
DigitalOut BG96_PWRKEY(D10);
DigitalOut BG97_WAKE(D11);
BG96_RESET = 0;
BG96_PWRKEY = 0;
BG97_WAKE = 0;
wait_ms(300);
}
void powerDown(){
platform_deinit();
BG96_Modem_PowerOFF();
}
//
// The main routine simply prints a banner, initializes the system
// starts the worker threads and waits for a termination (join)
int main(void)
{
//printStartMessage();
XNucleoIKS01A2 *mems_expansion_board = XNucleoIKS01A2::instance(I2C_SDA, I2C_SCL, D4, D5);
hum_temp = mems_expansion_board->ht_sensor;
acc_gyro = mems_expansion_board->acc_gyro;
pressure = mems_expansion_board->pt_sensor;
azure_client_thread.start(azure_task);
azure_client_thread.join();
platform_deinit();
printf(" - - - - - - - ALL DONE - - - - - - - \n");
return 0;
}
static void send_confirm_callback(IOTHUB_CLIENT_CONFIRMATION_RESULT result, void* userContextCallback)
{
//userContextCallback;
// When a message is sent this callback will get invoked
g_message_count_send_confirmations++;
deleteOK.set(0x1);
}
void sendMessage(IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle, char* buffer, size_t size)
{
IOTHUB_MESSAGE_HANDLE messageHandle = IoTHubMessage_CreateFromByteArray((const unsigned char*)buffer, size);
if (messageHandle == NULL) {
printf("unable to create a new IoTHubMessage\r\n");
return;
}
if (IoTHubClient_LL_SendEventAsync(iotHubClientHandle, messageHandle, send_confirm_callback, NULL) != IOTHUB_CLIENT_OK)
printf("FAILED to send! [RSSI=%d]\n", platform_RSSI());
else
printf("OK. [RSSI=%d]\n",platform_RSSI());
IoTHubMessage_Destroy(messageHandle);
}
void azure_task(void)
{
//bool tilt_detection_enabled=true;
float gtemp, ghumid, gpress;
int k;
int msg_sent=1;
while (true) {
powerUp();
mems_init();
/* Setup IoTHub client configuration */
IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle = IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
if (iotHubClientHandle == NULL) {
printf("Failed on IoTHubClient_Create\r\n");
return;
}
// add the certificate information
if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
printf("failure to set option \"TrustedCerts\"\r\n");
#if MBED_CONF_APP_TELUSKIT == 1
if (IoTHubClient_LL_SetOption(iotHubClientHandle, "product_info", "TELUSIOTKIT") != IOTHUB_CLIENT_OK)
printf("failure to set option \"product_info\"\r\n");
#endif
// polls will happen effectively at ~10 seconds. The default value of minimumPollingTime is 25 minutes.
// For more information, see:
// https://azure.microsoft.com/documentation/articles/iot-hub-devguide/#messaging
unsigned int minimumPollingTime = 9;
if (IoTHubClient_LL_SetOption(iotHubClientHandle, "MinimumPollingTime", &minimumPollingTime) != IOTHUB_CLIENT_OK)
printf("failure to set option \"MinimumPollingTime\"\r\n");
IoTDevice* iotDev = (IoTDevice*)malloc(sizeof(IoTDevice));
if (iotDev == NULL) {
return;
}
setUpIotStruct(iotDev);
char* msg;
size_t msgSize;
hum_temp->get_temperature(&gtemp); // get Temp
hum_temp->get_humidity(&ghumid); // get Humidity
pressure->get_pressure(&gpress); // get pressure
iotDev->Temperature = CTOF(gtemp);
iotDev->Humidity = (int)ghumid;
iotDev->Pressure = (int)gpress;
printf("(%04d)",msg_sent++);
msg = makeMessage(iotDev);
msgSize = strlen(msg);
sendMessage(iotHubClientHandle, msg, msgSize);
free(msg);
iotDev->Tilt &= 0x2;
/* schedule IoTHubClient to send events/receive commands */
IOTHUB_CLIENT_STATUS status;
while ((IoTHubClient_LL_GetSendStatus(iotHubClientHandle, &status) == IOTHUB_CLIENT_OK) && (status == IOTHUB_CLIENT_SEND_STATUS_BUSY))
{
IoTHubClient_LL_DoWork(iotHubClientHandle);
ThisThread::sleep_for(100);
}
deleteOK.wait_all(0x1);
free(iotDev);
IoTHubClient_LL_Destroy(iotHubClientHandle);
powerDown();
ThisThread::sleep_for(300000);
}
return;
}
I know PSM is probably the way to go since powering on/off the device draws a lot of power but it would be useful if someone had an idea of what is happening here.
2) Putting the device into PSM between sending messages.
The BG96 library in the example code I'm using doesn't have a method to turn on PSM, so I tried to implement my own. When I try to run it, it hits an exception right away, so I know it's wrong (I'm very new to embedded development and have no prior experience with AT commands).
/** ----------------------------------------------------------
 * this is a method provided by the current library
 * @brief  Tx a string to the BG96 and wait for an OK response
 * @param  none
 * @retval true if OK received, false otherwise
 */
bool BG96::tx2bg96(char* cmd) {
    bool ok = false;
    _bg96_mutex.lock();
    ok = _parser.send(cmd) && _parser.recv("OK");
    _bg96_mutex.unlock();
    return ok;
}

/**
 * method I created in an attempt to use PSM
 */
bool BG96::psm(void) {
    return tx2bg96((char*)"AT+CPSMS=1,,,”00000100”,”00000001”");
}
Can someone tell me what I'm doing wrong and provide any guidance on how I can achieve my goal of having my device run on battery for longer?
Thank you!!
I got Power Saving Mode working by using Mbed's ATCmdParser and the AT+QPSMS commands as per Quectel's docs. The modem doesn't always go into power saving mode right away so that should be noted. I also found that I have to restart the modem afterwards or else I get weird behaviour. My code looks something like this:
bool BG96::psm(char* T3412, char* T3324) {
    _bg96_mutex.lock();
    if (_parser.send("AT+QPSMS=1,,,\"%s\",\"%s\"", T3412, T3324) && _parser.recv("OK")) {
        _bg96_mutex.unlock();
    } else {
        _bg96_mutex.unlock();
        return false;
    }
    return BG96Ready();   // restarts the modem
}
To send a message to Azure, the modem will need to be manually woken up by driving the PWRKEY to start bi-directional communication, and a new client handle needs to be created and torn down every time, since the Azure connection uses keepAlive and the modem will be unreachable while it's in PSM.
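For reference, a call site would look something like the snippet below (the timer strings are the same ones used in the question; if I read the 3GPP GPRS timer encoding correctly they request a periodic TAU of roughly 40 minutes and an active time of 2 seconds, but double-check against Quectel's PSM application note):
// hypothetical call site - bg96 is whatever BG96 instance the application already holds
// "00000100": T3412, unit bits 000 = 10 min, value 00100 = 4  -> ~40 minutes
// "00000001": T3324, unit bits 000 = 2 s,    value 00001 = 1  -> ~2 seconds
if (!bg96->psm((char*)"00000100", (char*)"00000001")) {
    printf("enabling PSM failed\r\n");
}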

STM32F4: SD-Card using FatFs and USB fails

(also asked on SE: Electrical Engineering)
In my application, I've set up a STM32F4, SD-Card and USB-CDC (all with CubeMX).
Using a PC, I send commands to the STM32, which then does things on the SD-Card.
The commands are handled using a "communicationBuffer" (implemented by me) which waits for commands over USB, UART, ... and sets a flag when a \n character is received. The main loop polls this flag and, if it is set, a parser handles the command. So far, so good.
When I send commands via UART, it works fine, and I can get a list of the files on the SD-Card or perform other access via FatFs without a problem.
The problem occurs when I receive a command via USB-CDC. The parser works as expected, but FatFs reports FR_NO_FILESYSTEM (13) in f_opendir.
Other FatFs calls also fail with this error code.
After one failed USB command, commands via UART fail as well. It seems as if the USB somehow crashes the initialized SD-card driver.
Any idea how I can resolve this behaviour? Or a starting point for debugging?
My USB-Implementation:
I'm using CubeMX, and therefore use the prescribed way to initialize the USB-CDC interface:
main() calls MX_USB_DEVICE_Init(void).
In usbd_conf.c I've got:
void HAL_PCD_MspInit(PCD_HandleTypeDef* pcdHandle)
{
  GPIO_InitTypeDef GPIO_InitStruct;
  if (pcdHandle->Instance == USB_OTG_FS)
  {
    /* USER CODE BEGIN USB_OTG_FS_MspInit 0 */
    /* USER CODE END USB_OTG_FS_MspInit 0 */

    /**USB_OTG_FS GPIO Configuration
    PA11 ------> USB_OTG_FS_DM
    PA12 ------> USB_OTG_FS_DP
    */
    GPIO_InitStruct.Pin = OTG_FS_DM_Pin|OTG_FS_DP_Pin;
    GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
    GPIO_InitStruct.Pull = GPIO_NOPULL;
    GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
    GPIO_InitStruct.Alternate = GPIO_AF10_OTG_FS;
    HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);

    /* Peripheral clock enable */
    __HAL_RCC_USB_OTG_FS_CLK_ENABLE();

    /* Peripheral interrupt init */
    HAL_NVIC_SetPriority(OTG_FS_IRQn, 7, 1);
    HAL_NVIC_EnableIRQ(OTG_FS_IRQn);

    /* USER CODE BEGIN USB_OTG_FS_MspInit 1 */
    /* USER CODE END USB_OTG_FS_MspInit 1 */
  }
}
and the receive-process is implemented in usbd_cdc_if.c as follows:
static int8_t CDC_Receive_FS(uint8_t* Buf, uint32_t *Len)
{
  /* USER CODE BEGIN 6 */
  mRootObject->mUsbBuffer->fillBuffer(Buf, *Len);
  USBD_CDC_ReceivePacket(&hUsbDeviceFS);
  return (USBD_OK);
  /* USER CODE END 6 */
}
fillBuffer is implemented as follows (I use the same implementation for UART and USB transfer - with separate instances for the respective interfaces. mBuf is an instance-variable of type std::vector<char>):
void commBuf::fillBuffer(uint8_t *buf, size_t len)
{
    // Check if last fill has timed out
    if (SystemTime::getMS() - lastActionTime > timeout) {
        mBuf.clear();
    }
    lastActionTime = SystemTime::getMS();

    // Fill new content
    mBuf.insert(mBuf.end(), buf, buf + len);

    uint32_t done = 0;
    while (!done) {
        for (auto i = mBuf.end() - len, ee = mBuf.end(); i != ee; ++i) {
            if (*i == '\n') {
                newCommand = true;
                myCommand = std::string((char*) &mBuf[0], i - mBuf.begin() + 1);
                mBuf.erase(mBuf.begin(), mBuf.begin() + (i - mBuf.begin() + 1));
                break;
            }
        }
        done = 1;
    }
}
I resolved the problem:
In usbd_cdc_if.c the #define APP_RX_DATA_SIZE was set to 4 (for some unknown reason). As this is smaller than the packet size, incoming packets larger than 4 bytes were overwriting my memory.
It so happened that the memory right after that buffer held the FATFS* FatFs[] pointer list of the initialized FATFS filesystem structs.
So the address of that struct was overwritten whenever a command of 5 or more bytes arrived.
Phew, that was a tough one.
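For anyone hitting the same symptom: the buffers are defined in the CubeMX-generated usbd_cdc_if.c, and they only need to be at least one full-speed bulk packet (64 bytes) large; the sizes below are illustrative, not the ones from this project:
/* usbd_cdc_if.c (CubeMX generated) */
#define APP_RX_DATA_SIZE  512   /* was 4 - smaller than a 64-byte CDC packet, hence the corruption */
#define APP_TX_DATA_SIZE  512

uint8_t UserRxBufferFS[APP_RX_DATA_SIZE];
uint8_t UserTxBufferFS[APP_TX_DATA_SIZE];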

transform javascript to opcode using spidermonkey

I am new to SpiderMonkey and want to use it to transform a JavaScript file into a sequence of bytecode.
I got SpiderMonkey and built it in debug mode.
I want to use the JS_CompileScript function in jsapi.h to compile JavaScript code and analyze it to get the bytecode, but when I compile the code below and run it, I get a runtime error.
The error is "Unhandled exception at 0x0f55c020 (mozjs185-1.0.dll) in spiderMonkeyTest.exe: 0xC0000005: Access violation reading location 0x00000d4c." and I cannot resolve it.
Can anybody help me resolve this, or suggest another way to get bytecode from JavaScript code using SpiderMonkey?
// spiderMonkeyTest.cpp : Defines the entry point for the console application.
//
#define XP_WIN

#include <iostream>
#include <fstream>
#include "stdafx.h"
#include "jsapi.h"
#include "jsanalyze.h"

using namespace std;
using namespace js;

static JSClass global_class = {
    "global",
    JSCLASS_NEW_RESOLVE | JSCLASS_GLOBAL_FLAGS,
    JS_PropertyStub,
    NULL,
    JS_PropertyStub,
    JS_StrictPropertyStub,
    JS_EnumerateStub,
    JS_ResolveStub,
    JS_ConvertStub,
    NULL,
    JSCLASS_NO_OPTIONAL_MEMBERS
};

int _tmain(int argc, _TCHAR* argv[]) {
    /* Create a JS runtime. */
    JSRuntime *rt = JS_NewRuntime(16L * 1024L * 1024L);
    if (rt == NULL)
        return 1;

    /* Create a context. */
    JSContext *cx = JS_NewContext(rt, 8192);
    if (cx == NULL)
        return 1;
    JS_SetOptions(cx, JSOPTION_VAROBJFIX);

    JSScript *script;
    JSObject *obj;
    const char *js = "function a() { var tmp; tmp = 1 + 2; temp = temp * 2; alert(tmp); return 1; }";

    obj = JS_CompileScript(cx, JS_GetGlobalObject(cx), js, strlen(js), "code.js", NULL);
    script = obj->getScript();
    if (script == NULL)
        return JS_FALSE;    /* compilation error */

    js::analyze::Script *sc = new js::analyze::Script();
    sc->analyze(cx, script);

    JS_DestroyContext(cx);
    JS_DestroyRuntime(rt);

    /* Shut down the JS engine. */
    JS_ShutDown();
    return 1;
}
Which version of SpiderMonkey are you using? I am using the one that comes with Firefox 10, so the API may be different.
You should create a new global object and initialize it by calling JS_NewCompartmentAndGlobalObject() and JS_InitStandardClasses() before compiling your script:
.....
/*
* Create the global object in a new compartment.
* You always need a global object per context.
*/
global = JS_NewCompartmentAndGlobalObject(cx, &global_class, NULL);
if (global == NULL)
return 1;
/*
* Populate the global object with the standard JavaScript
* function and object classes, such as Object, Array, Date.
*/
if (!JS_InitStandardClasses(cx, global))
return 1;
......
Note that the function JS_NewCompartmentAndGlobalObject() is obsolete now; check the latest JSAPI documentation for the version you are using. Your JS_CompileScript() call just tries to retrieve a global object which has not been created, and this probably causes the exception.
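Putting it together for the question's mozjs185-era API, the compile step would then pass the newly created global instead of JS_GetGlobalObject(cx) (a sketch; exact signatures differ between SpiderMonkey releases):
JSObject *global = JS_NewCompartmentAndGlobalObject(cx, &global_class, NULL);
if (global == NULL)
    return 1;
if (!JS_InitStandardClasses(cx, global))
    return 1;

/* compile against the initialized global instead of JS_GetGlobalObject(cx) */
JSObject *scriptObj = JS_CompileScript(cx, global, js, strlen(js), "code.js", 1);
if (scriptObj == NULL)
    return 1;   /* compilation error */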
How about using the function "SaveCompiled"? It will save the object/opcode (compiled JavaScript) to a file.

How to write a Linux driver that only forwards file operations?

I need to implement a Linux kernel driver that (as a first step) only forwards all file operations to another file (in later steps this should be managed and manipulated, but I don't want to discuss that here).
My idea is the following, but when reading, the kernel crashes:
static struct {
    struct file *file;
    char *file_name;
    int open;
} file_out_data = {
    .file_name = "/any_file",
    .open = 0,
};

int memory_open(struct inode *inode, struct file *filp) {
    PRINTK("<1>open memory module\n");

    /*
     * We don't want to talk to two processes at the same time
     */
    if (file_out_data.open)
        return -EBUSY;

    /*
     * Initialize the message
     */
    Message_Ptr = Message;
    try_module_get(THIS_MODULE);

    file_out_data.file = filp_open(file_out_data.file_name, filp->f_flags, filp->f_mode); //here should be another return handling in case of fail
    file_out_data.open++;

    /* Success */
    return 0;
}

int memory_release(struct inode *inode, struct file *filp) {
    PRINTK("<1>release memory module\n");

    /*
     * We're now ready for our next caller
     */
    file_out_data.open--;
    filp_close(file_out_data.file, NULL);
    module_put(THIS_MODULE);

    /* Success */
    return 0;
}

ssize_t memory_read(struct file *filp, char *buf,
                    size_t count, loff_t *f_pos) {
    ssize_t ret;

    PRINTK("<1>read memory module \n");
    ret = file_out_data.file->f_op->read(file_out_data.file, buf, count, f_pos); //corrected one, false one is to find in the history
    return ret;
}
So, can anyone please tell me why?
Don't use set_fs() as there is no reason to do it.
Use file->f_op->read() instead of vfs_read. Take a look at the file and file_operations structures.
Why are you incrementing file_out_data.open twice and decrementing it once? This could cause you to use file_out_data.file after it has been closed.
Do you want to write memory to your file, or read it?
Because you are reading and not writing...
Possibly I'm wrong.
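Following that advice, a forwarding read could look roughly like this (a sketch for older kernels where file_operations::read is still the usual entry point; on recent kernels kernel_read() or the read_iter interface would be used instead):
ssize_t memory_read(struct file *filp, char __user *buf,
                    size_t count, loff_t *f_pos)
{
    struct file *out = file_out_data.file;

    if (!out || !out->f_op || !out->f_op->read)
        return -EINVAL;

    /* buf already points to user space, so it can be handed straight
     * to the backing file's read without any set_fs() tricks */
    return out->f_op->read(out, buf, count, f_pos);
}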