I would like to give my Bitmap a PropertyItem, but I am not sure how to give it a System::String^ value.
System::Drawing::Imaging::PropertyItem^ propItem = gcnew System::Drawing::Imaging::PropertyItem;
System::String^ newValue = gcnew System::String("newValue");
propItem->Id = PropertyTagImageTitle;
propItem->Len = 9;
propItem->Type = PropertyTagTypeASCII;
propItem->Value = newValue;
bmp->SetPropertyItem(propItem);
"System::Drawing::Imaging::PropertyItem::Value::set" cannot be called with the given argument list."
Argument types are(System::String^)
Object type is System::Drawing::Imaging::PropertyItem^
Hans Passant's answer is correct. I've implemented it as follows:
System::Drawing::Image^ theImage = System::Drawing::Image::FromFile("C:\\image.png");
System::Text::Encoding^ utf8 = System::Text::Encoding::UTF8;
array<System::Drawing::Imaging::PropertyItem^>^ propItem = theImage->PropertyItems;
System::String^ newValue = gcnew System::String("newValue");
propItem[0]->Id = PropertyTagImageTitle;
propItem[0]->Type = PropertyTagTypeByte;
// GetBytes() accepts the String^ directly; Len is the length in bytes
array<System::Byte>^ utf8Bytes = utf8->GetBytes(newValue);
propItem[0]->Len = utf8Bytes->Length;
propItem[0]->Value = utf8Bytes;
theImage->SetPropertyItem(propItem[0]);
The Value property type is array<Byte>^. That can hold anything you want, but a conversion is required, and for a string you have to fret a great deal about the encoding you use.
The MSDN docs and PropertyTagTypeASCII make no bones about it: you are expected to use Encoding::ASCII to make the conversion, using its GetBytes() method to generate the array. But that tends to be a problem; depending on where you live, the world does not speak ASCII, and you may have to violate the warranty.
Image metadata is in general poorly standardized; the spec rarely goes beyond specifying an array of bytes and leaves it unspecified how it should be interpreted. Practically, you can choose between Encoding::Default and Encoding::UTF8 to make the conversion. Encoding::Default is pretty likely to produce a readable string on the same machine on which the image was generated, but if the image is going to travel around the planet then UTF-8 tends to be the better choice. YMMV. In Germany you'd want to check that glyphs like ß and Ü come out as expected.
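For completeness, a minimal sketch of the ASCII variant the docs describe, reusing propItem and newValue from the snippet above (the trailing NUL is an assumption based on the documented null-terminated-string convention for PropertyTagTypeASCII):

array<System::Byte>^ ascii = System::Text::Encoding::ASCII->GetBytes(newValue);
// PropertyTagTypeASCII entries are conventionally NUL-terminated, so
// allocate one extra byte (gcnew zero-fills the array)
array<System::Byte>^ value = gcnew array<System::Byte>(ascii->Length + 1);
ascii->CopyTo(value, 0);
propItem[0]->Type = PropertyTagTypeASCII;
propItem[0]->Len = value->Length;
propItem[0]->Value = value;
theImage->SetPropertyItem(propItem[0]);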
I am starting with Vulkan, following Niko Kauppi's tutorial on YouTube.
I get an error when creating a device with vkCreateDevice: it returns VK_ERROR_EXTENSION_NOT_PRESENT.
Here is the relevant part of my code:
The call to vkCreateDevice
_gpu_count = 0;
vkEnumeratePhysicalDevices(instance, &_gpu_count, nullptr);
std::vector<VkPhysicalDevice> gpu_list(_gpu_count);
vkEnumeratePhysicalDevices(instance, &_gpu_count, gpu_list.data());
_gpu = gpu_list[0];
vkGetPhysicalDeviceProperties(_gpu, &_gpu_properties);
VkDeviceCreateInfo device_create_info = _CreateDeviceInfo();
vulkanCheckError(vkCreateDevice(_gpu, &device_create_info, nullptr, &_device));
_gpu_count == 1, and _gpu_properties seems to correctly identify my NVIDIA GPU (whose driver is not up to date).
device_create_info
VkDeviceCreateInfo _createDeviceInfo;
_createDeviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
_createDeviceInfo.queueCreateInfoCount = 1;
VkDeviceQueueCreateInfo _queueInfo = _CreateDeviceQueueInfo();
_createDeviceInfo.pQueueCreateInfos = &_queueInfo;
I don't understand the meaning of the error, "A requested extension is not supported", as the Khronos docs put it.
Thanks for your help
VK_ERROR_EXTENSION_NOT_PRESENT is returned when one of the extensions in the ppEnabledExtensionNames array (of length enabledExtensionCount) you provided is not supported by the driver (as queried by vkEnumerateDeviceExtensionProperties()).
Extensions can also have dependencies, so VK_ERROR_EXTENSION_NOT_PRESENT is also returned when a dependency of an extension in the list is missing from it.
If you want no device extensions, make sure enabledExtensionCount of VkDeviceCreateInfo is 0 (and not e.g. some uninitialized value).
I assume the second snippet is the whole body of _CreateDeviceInfo(), which would confirm the "uninitialized value" suspicion: nothing ever sets enabledExtensionCount.
Usually, though, you would want a swapchain extension there, to be able to render to the screen directly.
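A minimal sketch of what a fixed version could look like (assuming _queueInfo is filled in as in the question; the field names are from the Vulkan spec):

VkDeviceCreateInfo device_create_info = {}; // zero-initialize every field first
device_create_info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
device_create_info.queueCreateInfoCount = 1;
device_create_info.pQueueCreateInfos = &_queueInfo;
device_create_info.enabledExtensionCount = 0;        // no device extensions...
device_create_info.ppEnabledExtensionNames = nullptr;
// ...or, to render to a screen, request the swapchain extension instead:
// const char* exts[] = { VK_KHR_SWAPCHAIN_EXTENSION_NAME };
// device_create_info.enabledExtensionCount = 1;
// device_create_info.ppEnabledExtensionNames = exts;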
First of all, make sure your VkDeviceCreateInfo is zero-filled, otherwise it may carry garbage into your vkCreateDevice() call.
Add the following line just after declaring your VkDeviceCreateInfo:
memset ( &_createDeviceInfo, 0, sizeof(VkDeviceCreateInfo) );
Some extensions are absolutely necessary, such as the swapchain one.
To retrieve the available extensions, do this:
// Available extension names
const char* const* _ppExtensionNames = NULL;
// Get the extension count, then the extension properties
uint32_t _extensionCount = 0;
vkEnumerateDeviceExtensionProperties(_gpu, NULL, &_extensionCount, NULL);
std::vector<const char *> extNames;
std::vector<VkExtensionProperties> extProps(_extensionCount);
vkEnumerateDeviceExtensionProperties(_gpu, NULL, &_extensionCount, extProps.data());
for (uint32_t i = 0; i < _extensionCount; i++) {
    extNames.push_back(extProps[i].extensionName);
}
// Note: extNames holds pointers into extProps, so extProps must stay
// alive until after the vkCreateDevice() call
_ppExtensionNames = extNames.data();
Once you have all the extension names in _ppExtensionNames, pass them to your VkDeviceCreateInfo struct:
VkDeviceCreateInfo device_create_info ...
[...]
device_create_info.enabledExtensionCount = _extensionCount;
device_create_info.ppEnabledExtensionNames = _ppExtensionNames;
[...]
vulkanCheckError(vkCreateDevice(_gpu, &device_create_info, nullptr, &_device));
I hope it helps.
Please double-check the above code, as I'm writing it from memory.
Usual URL-shortening techniques use only a few characters of the usual URL charset, because they don't need more. A typical short URL is http://domain/code, where code is an integer number. Suppose that I can use any base (base10, base16, base36, base62, etc.) to represent that number.
QR Codes have many encoding modes, and we can optimize the QR Code (choosing the minimal version, to obtain the lowest density), so we can test pairs of baseX-modeY...
What is the best base-mode pair?
NOTES
A guess...
Two modes fit the "URL shortening profile":
0010 - Alphanumeric encoding (11 bits per 2 characters)
0100 - Byte encoding (8 bits per character)
My choice was "upper-case base36" with Alphanumeric encoding (which also encodes "/", ":", etc.), but I have not seen any demonstration that it is always (for any URL length) the best. Is there a good guide or mathematical demonstration for this kind of optimization?
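For instance, a rough count for one case (my own arithmetic, using the bit costs above): to encode numbers up to 10^9, base36 needs 6 digits, which Alphanumeric mode stores in 3 × 11 = 33 bits, while base62 also needs 6 characters (62^5 < 10^9), which Byte mode stores in 6 × 8 = 48 bits. The prefix tips the balance further: "HTTP://DOMAIN/" is 14 characters, i.e. 7 × 11 = 77 bits in Alphanumeric mode versus 14 × 8 = 112 bits in Byte mode, so base36/Alphanumeric wins at this length.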
The ideal (perhaps impracticable)
There is another variation: "encoding modes can be mixed as needed within a QR symbol" (Wikipedia)... So we can also use
HTTP://DOMAIN/ with Alphanumeric + change_mode + Numeric encoding (10 bits per 3 digits)
For long URLs (long integers), of course, this is the best solution (!), because it uses the full charset with nothing wasted... Is it?
The problem is that this kind of optimization (mixed mode) is not available in the usual QR-code image generators... Is it practicable? Is there a generator that implements it correctly?
An alternative answer format
The (practicable) question is about the best combination of base and mode, so we can express it as a function (e.g. in JavaScript):
function bestBaseMode(domain,number_range) {
var dom_len = domain.length;
var urlBase_len = dom_len+8; // 8 = "http://".length + "/".length;
var num_min = number_range[0];
var num_max = number_range[1];
// ... check optimal base and mode
return [base,mode];
}
Example-1: the domain is "bit.ly" and the code is an ISO 3166-1 numeric country code,
ranging from 4 to 894. So urlBase_len=14, num_min=4 and num_max=894.
Example-2: the domain is "postcode-resolver.org" and the number_range parameter is the range of the most frequent postal-code integer representations, for instance a statistically inferred range from ~999 to ~9999999. So urlBase_len=29, num_min=999 and num_max=9999999.
Example-3: the domain is "my-example3.net" and number_range is a double SHA-1 code, i.e. a fixed-length code of 40 bytes (2 concatenated hexadecimal numbers, each 40 digits long). So num_max=num_min=Math.pow(2,320).
Nobody wanted my bounty... I lost it, and now I also need to do the work myself ;-)
About the ideal
The goQR.me support replied to the particular question about mixed encoding, noting that, unfortunately, it can't be relied on:
Sorry, our API does not support mixed QR code encoding. Even though
the standard may define it, real-world QR code scanner apps on
mobile phones have tons of bugs; we would not recommend relying
on this feature.
Functional answer
This function shows the answers in the console... It is a simplification, a "brute force" solution.
/**
 * Find the best base-mode pair for a short-URL template as a QR Code.
 * @param msg label for debug or report.
 * @param domain the string of the internet domain.
 * @param digits10 the max. number of digits of a decimal representation.
 * @return array of objects with equivalent valid answers.
 */
function bestBaseMode(msg, domain, digits10) {
  var commonBases = [2,8,10,16,36,60,62,64,124,248]; // your config
  var dom_len = domain.length;
  var urlBase_len = dom_len + 8; // 8 = "http://".length + "/".length
  var numb = parseFloat("9".repeat(digits10)); // biggest number with digits10 digits
  var scores = [];
  var best = 99999;
  for (var i = 0; i < commonBases.length; i++) {
    var b = commonBases[i];
    // formula at http://math.stackexchange.com/a/335063
    var digits = Math.floor(Math.log(numb) / Math.log(b)) + 1;
    var mode = 'alpha';
    var len = dom_len + digits;
    var lost = 0;
    if (b > 36) {
      mode = 'byte';
      lost = Math.floor(urlBase_len * 0.25); // only 6 of 8 bits used in the URL part
    }
    var score = len + lost; // penalty
    scores.push({BASE: b, MODE: mode, digits: digits, score: score});
    if (score < best) best = score;
  }
  var r = []; // collect all answers that tie for the best score
  for (var j = 0; j < scores.length; j++) {
    if (scores[j].score == best) r.push(scores[j]);
  }
  return r;
}
Running the question examples:
var x = bestBaseMode("Example-1", "bit.ly",3);
console.log(JSON.stringify(x)) // "BASE":36,"MODE":"alpha","digits":2,"score":8
var x = bestBaseMode("Example-2", "postcode-resolver.org",7);
console.log(JSON.stringify(x)) // "BASE":36,"MODE":"alpha","digits":5,"score":26
var x = bestBaseMode("Example-3", "my-example3.net",97);
console.log(JSON.stringify(x)) // "BASE":248,"MODE":"byte","digits":41,"score":61
I am currently building a simple interpreter for this language for practice. The only problem left to overcome is reading a single byte as a character from user input. I have the following code so far, but I need a way to turn the String that the second line reads into a u8 or another integer type that I can cast:
let input = String::new()
let string = std::io::stdin().read_line(&mut input).ok().expect("Failed to read line");
let bytes = string.chars().nth(0) // Turn this to byte?
The value in bytes should be a u8, which I can cast to an i32 to use elsewhere. Perhaps there is a simpler way to do this; otherwise I will use any solution that works.
Reading just a byte and casting it to i32:
use std::io::Read;
let input: Option<i32> = std::io::stdin()
.bytes()
.next()
.and_then(|result| result.ok())
.map(|byte| byte as i32);
println!("{:?}", input);
First, make your input mutable, then use bytes() instead of chars().
let mut input = String::new();
std::io::stdin().read_line(&mut input).expect("Failed to read line");
let byte = input.bytes().nth(0).expect("no byte read");
Please note that Rust strings are a sequence of UTF-8-encoded code points, which are not necessarily byte-sized. Depending on what you are trying to achieve, using a char may be the better option.
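For example, a sketch of the char-based variant (variable names are mine):

let mut input = String::new();
std::io::stdin().read_line(&mut input).expect("Failed to read line");
// chars() yields whole code points; a char casts losslessly to u32
let first = input.chars().next().expect("empty input");
let code = first as u32; // e.g. for input 'é' this is 233, whereas the first raw byte would be 195
println!("{}", code);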
-(void)InitWithPwd:(char *)pPwd
{
char szResult[17];
//generate md5 checksum
CC_MD5(pPwd, strlen(pPwd),&szResult[0]);
szResult[16] = 0;
m_csPasswordHash[0]=0;
for(int i = 0;i < 16;i++)
{
char sz[3] = {'\0'};
//crashes in the row below; the first pass is OK, the third pass crashes
//I can't understand why
sprintf(&sz[0],"%2.2x",szResult[i]);
strcat(m_csPasswordHash,sz);
}
m_csPasswordHash[32] = 0;
printf("pass:%s\n",m_csPasswordHash);
m_ucPacketType = 1;
}
I want to get the MD5 of the password, but the above code crashes again and again, and I can't understand why.
Your buffer (sz) is too small, causing sprintf() to generate a buffer overflow, which leads to undefined behavior; in your case, a crash.
Note that szResult[i] might be a negative value when viewed as an int (which happens when passing a char-type value to sprintf()), and that can cause sprintf() to disregard your field width and precision directives in order to format the full value.
Here is an example showing this problem. The example code is written in C, but that shouldn't matter for this case.
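A minimal stand-in for that example (my own code):

#include <stdio.h>

int main(void)
{
    char c = (char) 0x9c;   /* negative on platforms where char is signed */
    /* c is promoted to a negative int, so the value formatted is
       0xffffff9c: eight hex digits despite "%2.2x" -- sprintf'ing
       that into a char[3] buffer overflows it */
    printf("%2.2x\n", c);   /* prints ffffff9c */
    return 0;
}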
This solves the problem by making sure the incoming data is considered unsigned:
sprintf(sz, "%02x", (unsigned char) szResult[i]);
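Applied to the loop from the question, the corrected body could look like this sketch (same buffers as declared there):

for (int i = 0; i < 16; i++)
{
    char sz[3];
    /* the cast keeps the promoted value in 0..255, so exactly two
       hex digits are produced and sz cannot overflow */
    sprintf(sz, "%02x", (unsigned char) szResult[i]);
    strcat(m_csPasswordHash, sz);
}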
I'm learning cocos2d [an OpenGL wrapper for Objective-C on iPhone], and now, playing with sprites, I have found this in an example:
enum {
easySprite = 0x0000000a,
mediumSprite = 0x0000000b,
hardSprite = 0x0000000c,
backButton = 0x0000000d,
magneticSprite = 0x0000000e,
magneticSprite2 = 0x0000000f
};
...
-(id) init
{...
//second sprite
TSprite *med = [TSprite spriteWithFile:@"butonB.png"]; //blue
[med SetCanTrack:YES];
[self addChild: med z:1 tag:mediumSprite];
med.position=ccp(299,230);
[TSprite track:med];
so the value defined in the enum is used as the tag of the created sprite object,
but I don't understand:
why give the tags hexadecimal values?
why use the enum without a tag (name)?
This is how I knew enums in Objective-C and C:
typedef enum {
JPG,
PNG,
GIF,
PVR
} kImageType;
thanks!
Usually, when you are creating an enum, you want to use it as a type (for variables, method parameters etc.).
In this case, it's just a way to declare integer constants. Since they don't want to use the enum as a type, the name is not necessary.
Edit:
Hexadecimal numbers are commonly used when the integer is a binary mask. You won't see arithmetic operators like +, -, *, / used with such a number; you'll see bitwise operators (~, &, |, ^).
Every digit in a hexadecimal number represents 4 bits. The whole number is a 32-bit integer, and by writing it in hexadecimal in this case, you are saying that only the last four bits are used and that the other bits may be used for something else. That wouldn't be obvious from a decimal number.
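For example, a mask-style enum and the operators you would typically see with it (illustrative names and values, not from the cocos2d example):

enum {
    kFlagTrackable = 0x01,  // bit 0
    kFlagMagnetic  = 0x02,  // bit 1
    kFlagHidden    = 0x04   // bit 2
};

int flags = kFlagTrackable | kFlagMagnetic;  // set two bits -> 0x03
if (flags & kFlagMagnetic) { /* test a single bit */ }
flags &= ~kFlagHidden;                       // clear one bit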
Enum members are automatically assigned values, incrementing from 0, but you can assign your own.
If you don't specify any values, they start from 0, as in:
typedef enum {
JPG,
PNG,
GIF,
PVR
} kImageType;
But you could assign them values:
typedef enum {
JPG = 0,
PNG = 1,
GIF = 2,
PVR = 3
} kImageType;
or even
typedef enum {
JPG = 100,
PNG = 0x01,
GIF = 100,
PVR = 0xff
} kImageType;
anything you want; repeating values are OK as well.
I'm not sure why they were given those specific values, but they might have some meaning related to their use.
Well, you seem to be working from a terrible example. :)
At least as far as enums are concerned. It's up to anyone to define the actual value of an enum entry, but there's no gain in using hex numbers here, and in particular there's no point in starting the hex numbers at a through f (10 to 15). The example will also work with this enum:
enum {
easySprite = 10,
mediumSprite,
hardSprite,
backButton,
magneticSprite,
magneticSprite2
};
And unless there's some point in having the enumeration start at value 10, it will probably work without specifying any concrete values at all.