Interpret return value (dissenter) when trying to unmount volume in OS X - objective-c

I'm trying to unmount a volume in my Cocoa application using the Disk Arbitration Framework.
Before calling:
DADiskUnmount(disk,
kDADiskUnmountOptionDefault,
unmountCallback,
self );
I register a callback function that gets called afterwards:
void unmountCallback(DADiskRef disk, DADissenterRef dissenter, void *context)
{
    if (dissenter != NULL)
    {
        DAReturn ret = DADissenterGetStatus(dissenter);
        switch (ret) {
            case kDAReturnBusy:
                printf("kDAReturnBusy\n");
                break;
        }
    }
}
In this function I try to interpret the dissenter return value but get stuck. I suppose it should be of type DAReturn and have a value like kDAReturnBusy. But when e.g. iTunes is using the volume and it cannot be unmounted, "ret" has a value of 0xc010 that I don't quite understand.
In case unmounting fails I'd like to find out why the volume can't be unmounted, and in case another application is using it, remind the user to close that application.

But when e.g. iTunes is using the volume and it cannot be unmounted, "ret" has a value of 0xc010 that I don't quite understand.
The documentation you linked to, for the DAReturn type, lists all the Disk Arbitration constants as looking like this:
kDAReturnError = err_local | err_local_diskarbitration | 0x01, /* ( 0xF8DA0001 ) */
So, DA's error returns are all made of three components, OR'd together.
If you look at the documentation for DADissenterGetStatus, it says:
A BSD return code, if applicable, is encoded with unix_err().
If you then search the headers for unix_err, you find it in /usr/include/mach/error.h, which says:
/* unix errors get lumped into one subsystem */
#define unix_err(errno) (err_kern|err_sub(3)|errno)
and:
/*
* error number layout as follows:
*
* hi lo
* | system(6) | subsystem(12) | code(14) |
*/
There's those three components again. Some other macros in error.h arrange the system and subsystem values (e.g., err_kern and err_sub(3)) into those positions.
So now, let's open the Calculator, press ⌘3 to put it into programmer mode, switch it to base-16, and type in your error code, and see what it says:
0xC010
0000 0000 0000 0000 1100 0000 0001 0000   (bit 31 on the left, bit 0 on the right)
Breaking that apart according to the above layout, we find:
System (top 6 bits): 0000 00 = 0, which error.h says is err_kern. This error came from the kernel.
Subsystem (next 12 bits): 00 0000 0000 11 = 3 (0b11). This plus the system code matches the aforementioned definition of unix_err. So this is a BSD return code, as DADissenterGetStatus said.
Individual error code (low 14 bits): 00 0000 0001 0000 = 16 (0x10, 0b10000).
UNIX/BSD errors are defined in <sys/errno.h>, which says:
#define EBUSY 16 /* Device / Resource busy */
This suggests to me that you can't unmount that device because it's in use.
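If you want to do the same decomposition in code instead of in Calculator, here's a minimal sketch (assuming the err_get_system / err_get_sub / err_get_code macros from <mach/error.h>, the same header that defines unix_err):

#include <DiskArbitration/DiskArbitration.h>
#include <mach/error.h>   /* err_get_system, err_get_sub, err_get_code, unix_err */
#include <errno.h>        /* EBUSY and friends */
#include <string.h>       /* strerror */
#include <stdio.h>

static void describeDissenter(DADissenterRef dissenter)
{
    DAReturn status = DADissenterGetStatus(dissenter);

    /* Split the 32-bit status into the three components described above. */
    int system    = err_get_system(status);   /* top 6 bits   */
    int subsystem = err_get_sub(status);      /* next 12 bits */
    int code      = err_get_code(status);     /* low 14 bits  */

    if (status == unix_err(code)) {
        /* system == err_kern and subsystem == 3: a BSD errno value. */
        printf("BSD errno %d (%s)\n", code, strerror(code));  /* 0xC010 -> 16 -> EBUSY */
    } else {
        printf("system %d, subsystem %d, code 0x%x\n", system, subsystem, code);
    }
}

For the busy case you could then check code == EBUSY and prompt the user accordingly.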

The above post nicely explains how to find out information about the error code you are seeing.
However, how do you actually solve the issue of the unmount failing due to EBUSY?
If you don't care about processes that might still be using the mounted volume, you can just force the unmount by changing:
DADiskUnmount(disk, kDADiskUnmountOptionDefault...)
to
DADiskUnmount(disk, kDADiskUnmountOptionForce...)
Your idea of "reminding the user to close this application" is more complicated to implement. If you really want to go that way, I guess you could parse the output of /usr/sbin/lsof to find the 'offending' process names, as sketched below.
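For example, a rough sketch of that last idea (not a robust implementation; it shells out to lsof with popen and assumes you know the volume's mount path - lsof's -F c field output prints one "c<command>" line per process holding files open on the file system):

#include <stdio.h>

/* Print the names of processes holding files open on the given mount point,
   by parsing `lsof -Fc <path>` field output ("p<pid>" / "c<command>" lines). */
static void printOffendingProcesses(const char *mountPoint)
{
    char command[1024];
    snprintf(command, sizeof(command), "/usr/sbin/lsof -Fc \"%s\"", mountPoint);

    FILE *pipe = popen(command, "r");
    if (pipe == NULL)
        return;

    char line[256];
    while (fgets(line, sizeof(line), pipe) != NULL) {
        if (line[0] == 'c')                      /* 'c' lines carry the command name */
            printf("in use by: %s", line + 1);
    }
    pclose(pipe);
}

Whether you then just show those names in an alert or do something smarter is up to you.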


STM32CubeMX USB CDC VCP?

I've found a large number of examples, but nothing on how to do it "properly" from STM32CubeMX.
How do I create skeleton code from STM32CubeMX for USB CDC virtual COM port communications (if possible for the STM32F4 Discovery)?
An STM32CubeMX project for the Discovery F4 with CDC as a USB device should work out of the box, assuming you use an up-to-date STM32CubeMX and library:
Start STM32CubeMX
Select the board Discovery F4
Enable the peripheral USB_OTG_FS as device only (leave the other options unchecked)
Enable the middleware USB_Device Communication Device Class (CDC)
In the clock tab, check that the clock source is HSE HCLK. It should give 168 MHz HCLK and 48 MHz on the 48 MHz (USB) clock. Check there is no red anywhere.
Save the project
Generate code (I used SW4STM32 toolchains)
Build (you may need to switch to the internal CDT builder vs. GNU make).
Now add some code to send data over the COM port and voilà, it should work.
Actually, the tricky part is to not make any "CDC" access until the host USB is connected (no CDC setup has happened yet).
Here is how I did it for quick emit test:
In file usbd_cdc_if.c
uint8_t CDC_Transmit_FS(uint8_t* Buf, uint16_t Len)
{
    uint8_t result = USBD_OK;
    /* USER CODE BEGIN 7 */
    if (hUsbDevice_0 == NULL)        /* no CDC access until the host has set us up */
        return USBD_FAIL;
    USBD_CDC_SetTxBuffer(hUsbDevice_0, Buf, Len);
    result = USBD_CDC_TransmitPacket(hUsbDevice_0);
    /* USER CODE END 7 */
    return result;
}
static int8_t CDC_DeInit_FS(void)
{
    /* USER CODE BEGIN 4 */
    hUsbDevice_0 = NULL;             /* mark the device as gone */
    return (USBD_OK);
    /* USER CODE END 4 */
}
In file main.c
/* USER CODE BEGIN Includes */
#include "usbd_cdc_if.h"
/* USER CODE END Includes */
....
/* USER CODE BEGIN WHILE */
while (1)
{
    /* USER CODE END WHILE */
    /* USER CODE BEGIN 3 */
    uint8_t HiMsg[] = "hello\r\n";
    CDC_Transmit_FS(HiMsg, (uint16_t)strlen((char *)HiMsg));   /* needs <string.h> */
    HAL_Delay(200);
}
As soon as you plug in the micro USB (CN5), CDC data will start to show up on the host terminal.
That works. I can see "hello" on the terminal (you may need to install a driver, http://www.st.com/web/en/catalog/tools/PF257938).
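If you prefer not to track a pointer yourself, a slightly more defensive guard (just a sketch, assuming the CubeMX-generated hUsbDeviceFS handle and the USBD_STATE_CONFIGURED constant from usbd_def.h) is to check the device state before touching CDC:

/* In usbd_cdc_if.c, inside CDC_Transmit_FS(): refuse to touch CDC until the
   host has actually configured the device (i.e. enumeration is complete). */
extern USBD_HandleTypeDef hUsbDeviceFS;

if (hUsbDeviceFS.dev_state != USBD_STATE_CONFIGURED)
    return USBD_FAIL;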
For reception, it first needs to be armed, i.e. started by an initial call to USBD_CDC_ReceivePacket() in a suitable place; CDC_Init_FS is a good one.
Then you can handle data as it arrives in CDC_Receive_FS, re-arming reception again from there.
That works for me.
static int8_t CDC_Receive_FS(uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 6 */
    USBD_CDC_ReceivePacket(hUsbDevice_0);
    return (USBD_OK);
    /* USER CODE END 6 */
}
static int8_t CDC_Init_FS(void)
{
    hUsbDevice_0 = &hUsbDeviceFS;
    /* USER CODE BEGIN 3 */
    /* Set Application Buffers */
    USBD_CDC_SetTxBuffer(hUsbDevice_0, UserTxBufferFS, 0);
    USBD_CDC_SetRxBuffer(hUsbDevice_0, UserRxBufferFS);
    USBD_CDC_ReceivePacket(hUsbDevice_0);
    return (USBD_OK);
    /* USER CODE END 3 */
}
There are a number of STM32F4 Discovery boards supported by the STM32Cube software, and you haven’t said which you’re using, but I’ve had exactly the same issue with the Discovery board with the F401VCT MCU.
After installing the STM virtual COM port driver, Windows Device Manager showed a STMicroelectronics virtual COM port, but with a yellow warning mark. The COM port was not accessible with a terminal application (PuTTY).
I eventually found that there is a problem with the source code output from STM32CubeMX. But there is a simple fix:
Open a new STM32CubeMX project, enable USB_OTG_FS as Device Only, and select CDC Virtual Port COM from the Middlewares USB_Device drop-down.
Generate the source code with no other changes needed to any USB settings.
In file usbd_cdc_if.c, change #define USB_HS_MAX_PACKET_SIZE from 512 to 256.
In file usbd_cdc.c, change #define CDC_DATA_HS_MAX_PACKET_SIZE from 512 to 256.
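With those two edits applied, the relevant lines end up looking like this (values as per the steps above):

/* usbd_cdc_if.c */
#define USB_HS_MAX_PACKET_SIZE          256   /* was 512 */

/* usbd_cdc.c */
#define CDC_DATA_HS_MAX_PACKET_SIZE     256   /* was 512 */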
After doing this, the yellow warning disappeared from Device Manager, and I could receive data at the CDC_Receive_FS function (in the usbd_cdc_if.c file) when using PuTTY. Be aware that these definitions revert to their incorrect values each time STM32CubeMX regenerates the code, and I haven't found a way around this yet.
I hope this helps.
iChal's fix worked to remove the yellow warning mark.
I would like to mention that USB_HS_MAX_PACKET_SIZE is now in usbd_def.h and CDC_DATA_HS_MAX_PACKET_SIZE is in usbd_cdc.h
I am using STM32CubeMX v4.11.0, STM32Cube v1.0, and the STM32F401C-DISCO.
On further work, I now only have to set the heap size to a larger value.
I am setting it to 0x600 as I also have FreeRTOS enabled. I am using IAR EWARM, so the change is made in linker script stm32f401xc_flash.icf.
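For reference, in a stock EWARM project that is a one-line change in the linker configuration file; the symbol name below is the usual IAR .icf convention, so check your own script:

/* stm32f401xc_flash.icf */
define symbol __ICFEDIT_size_heap__ = 0x600;   /* enlarged heap for FreeRTOS + USB */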

STM32 flashing disabled after flashing code without R/W protection

I have some experience with the StdPeriph libraries for programming STM32s. But now I tried the STM32Cube HAL with the STM32CubeMX code generator. I generated a project with these options:
Middleware: FreeRTOS and FatFS via SDIO
Compiler is GCC
stm32f103ret6 MCU
I imported the generated code into an Eclipse environment. I made a binary and flashed it with "st-flash write ..." as usual. My test program successfully wrote "Hello" to USART1 in a loop - no problem there. But then, when I tried to flash other code, it failed with "unknown chip id". If I manually connect NRST to GND, st-flash gives:
...Flash: 0 bytes (0 KiB) in pages of 2048 bytes
Full output:
2015-06-14T16:07:29 INFO src/stlink-common.c: Loading device parameters....
2015-06-14T16:07:29 INFO src/stlink-common.c: Device connected is: F1 High-density device, id 0x10036414
2015-06-14T16:07:29 INFO src/stlink-common.c: SRAM size: 0x10000 bytes (64 KiB), Flash: 0 bytes (0 KiB) in pages of 2048 bytes
I tried to use the ST-Link Utility from Windows, but it cannot connect to this MCU to change the option bytes (connecting to other devices with STM32 works well).
I tried to flash through USART1, but it failed.
The source code I flashed, of course, does not contain any code enabling read/write protection. I tried two other MCUs, but the error was reproduced.
How can I unbrick my MCUs and flash anything?
I found the root cause!
This is a HAL initialization function, generated by STM32CubeMX:
void HAL_MspInit(void)
{
    /* USER CODE BEGIN MspInit 0 */
    /* USER CODE END MspInit 0 */
    __HAL_RCC_AFIO_CLK_ENABLE();
    HAL_NVIC_SetPriorityGrouping(NVIC_PRIORITYGROUP_4);
    /* System interrupt init*/
    /* SysTick_IRQn interrupt configuration */
    HAL_NVIC_SetPriority(SysTick_IRQn, 0, 0);
    /**DISABLE: JTAG-DP Disabled and SW-DP Disabled
    */
    __HAL_AFIO_REMAP_SWJ_DISABLE();
    /* USER CODE BEGIN MspInit 1 */
    /* USER CODE END MspInit 1 */
}
I didn't notice these simple lines!
/**DISABLE: JTAG-DP Disabled and SW-DP Disabled
*/
__HAL_AFIO_REMAP_SWJ_DISABLE();
This macro completely disables SWD and JTAG programming; look at stm32f1xx_hal_gpio_ex.h:
#define __HAL_AFIO_REMAP_SWJ_DISABLE() MODIFY_REG(AFIO->MAPR, AFIO_MAPR_SWJ_CFG, AFIO_MAPR_SWJ_CFG_DISABLE)
I didn't find any checkbox in CubeMX to disable/enable SWD/JTAG, so this is the only behavior of the code generator! Pay attention to this point when using STM32CubeMX!
If you set the pin assignments for the JTAG/SWD pins correctly (e.g. SYS_JTDI, SYS_JTDO-TRACESWO, etc.) on the pinout tab of STM32CubeMX, the generated code will not disable JTAG/SWD.
It's (buried) under Pinout | SYS | Debug in STM32CubeMX... set it to Serial Wire or whatever you need.
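For comparison, when Debug is set to Serial Wire there, the generated HAL_MspInit keeps SWD alive, typically by emitting the NOJTAG remap instead of the full disable (macro from stm32f1xx_hal_gpio_ex.h):

/**NOJTAG: JTAG-DP Disabled and SW-DP Enabled */
__HAL_AFIO_REMAP_SWJ_NOJTAG();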

Difference Between POS Entry Modes (Field 22)

I was wondering if anyone could help me understand the difference between ISO 8583 Field 22 (POS Entry Mode) values. I already know that:
52 means ICC Card
80 in case of fallback
But what I want to know is difference between
22 (Magnetic Stripe)
and 90
Can anyone help me on this?
The length of Field 22 is usually 3 digits (or 4 digits in case it is BCD-packed into two bytes) in protocols based on ISO 8583:1987, or 12 digits in protocols based on the ISO 8583:1993 version. Customized protocols could use different sub-field contents and value meanings.
Since you use short values in your question, I guess your Field 22 is based on the ISO 8583:1987 version and you lost the leading and/or trailing zero. So your sample values become 3 digits long - 052, 800, 022, and 090 or 900.
Usually the 3-digit Field 22 is split into two sub-fields:
Positions 1 and 2 - Personal Account Number (PAN) entry mode (or capability);
Position 3 - Personal Identification Number (PIN) entry capability;
Here are the possible interpretations:
02 - PAN auto-entry via magnetic stripe, track data is not required; 2 - no PIN.
05 - PAN auto-entry via chip; 2 - no PIN.
09 - E-Commerce; 0 - unknown PIN capability.
80 - Fallback to magnetic stripe; 0 - unknown PIN capability.
90 - PAN auto-entry via magnetic stripe, track data should be transmitted within the authorization request; 0 - unknown PIN capability.
etc.
90 is used when track data is present in the ISO 8583 request message; 02 if, for some reason, the acquirer or terminal device is not qualified to transfer track data in the request messages.
Depending on protocol requirements there could be exceptions to these Field 22 values. They are usually checked during the terminal device and communication interface certifications.
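To make the two-sub-field split concrete, here is a tiny illustrative sketch (a hypothetical helper, assuming the plain 3-digit 1987-style value arrives as a string):

#include <stdio.h>
#include <string.h>

/* Split a 3-digit ISO 8583:1987 Field 22 value into its two sub-fields:
   positions 1-2 = PAN entry mode, position 3 = PIN entry capability. */
static void splitField22(const char *field22)
{
    if (field22 == NULL || strlen(field22) != 3)
        return;

    char panEntry[3] = { field22[0], field22[1], '\0' };
    char pinCapability = field22[2];

    printf("PAN entry mode: %s, PIN capability: %c\n", panEntry, pinCapability);
    /* e.g. "022" -> PAN entry 02 (magnetic stripe), PIN capability 2 (no PIN) */
}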
I will elaborate on a few things here. From the above comments I can see that 09 is for e-commerce transactions, but as far as I know, for e-commerce transactions we should use PAN entry mode 01 (manual entry), because for card-not-present transactions the entry mode is always manual.
The POS entry mode says whether the particular transaction is e-commerce or POS. The possible values are:
01 - Manual entry
02 - Magnetic stripe; track 2 data will be ignored
05 - Smart card; track 2 data required
90 - Magnetic stripe; no track 2 data
91 - Contactless card
95 - Smart card; track 2 data not required
Thanks - please share your ideas on this.

Addressing ECUs directly using ELM 327 dongle and ISO 9141

I have a VW Golf 4, which is quite old and talks KWP 2000 (ISO 9141) on its CAN bus. I use a dongle powered by ELM 327, connected to the OBD-2 port of the car.
I am trying to send messages individually to each ECU. I tried to change the header of the messages:
AT SH 48 XX F1 (I hoped XX would be the ECU ID; 48 is the flag for "use physical addressing"). Any command I issue (e.g. I tried 3E for "tester present") returns NO DATA (I disabled automatic timeouts and set the timeout to the maximum value).
Is there a way to send messages directly to an ECU? I am not interested in the set of data provided via OBD-2, nor do I want to re-flash the ECUs. At the moment I am just trying to find out which ECUs are available on the bus.
Thanks!
VW works on Transport Protocol TP 2.0, hence you need to initialize the connection with a 0x200 header.
https://jazdw.net/tp20
See the link above for more info.
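Very roughly, and only as an illustration (the exact channel-setup bytes, timing parameters and keep-alive frames are what the linked page documents; none of this is guaranteed for your particular car), an ELM327 session for TP 2.0 would start along these lines:

AT SP 6     (select 11-bit, 500 kbit/s CAN)
AT CAF0     (turn off automatic formatting so raw data bytes can be sent)
AT SH 200   (address the TP 2.0 channel-setup ID 0x200)
...then send the 7-byte TP 2.0 channel setup request for the target ECU's
logical address, as described on the linked page, and keep the channel alive.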

XBee/ZigBee wireless module API <- VB Express: when I send an ND command to a remote endpoint, what do I get?

When I send an API ND command to a remote endpoint, I get ???
When I send an API ND command from a VB program using the following packet:
7E 00 05 08 01 4E 44 00 64
I get:
7E 05 3F 14 E4 41 3F
It's a response -- but not as I know it. Neither the checksum "3F" nor the frame length "05" makes sense to me. On the other hand, if I wait for more bytes by setting SerialPort1.ReceivedBytesThreshold to 10 (threshold: 10 bytes in the buffer fires the event), the SerialPort1.ReadExisting() call times out. Any suggestions for decoding? Both the coordinator and endpoint are XBee PRO S2Bs.
I don't think it makes sense to send ATND as a remote AT command; it will probably be ignored on the remote node, or it will trigger node discovery at that node with the responses staying local.
It looks like your response is possibly dropping null bytes (0x00), like the MSB of length, and one more in the packet itself. I'm not familiar with a frame type of 0x3F though -- is it documented for that XBee module you're using?
After a node discovery, you should see multiple AT Response frames (type 0x88?) come back over some time (based on ATNT, I believe), until you get one with a short payload (indicating discovery is complete).
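One quick way to check whether bytes really are being dropped is to verify the API frame checksum yourself. The standard XBee API rule is: sum the frame-data bytes that follow the 2-byte length field, and the trailing checksum byte must bring that sum to 0xFF. A small sketch:

#include <stdint.h>
#include <stddef.h>

/* frame[] holds the bytes after the 2-byte length field, i.e. the frame data
   followed by the trailing checksum byte. */
static int xbeeFrameChecksumOk(const uint8_t *frame, size_t lengthWithChecksum)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < lengthWithChecksum; i++)
        sum += frame[i];            /* includes the trailing checksum byte */
    return sum == 0xFF;             /* valid frames always sum to 0xFF */
}

For the request in the question, 08 + 01 + 4E + 44 + 00 + 64 = 0xFF, so that frame is well formed; running the same check over what you actually receive will quickly show whether the reply has been mangled in transit or in the serial-port handling.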