TensorFlow Model is still floating point after Post-training quantization - tensorflow

After applying post-training quantization, my custom CNN model shrank to 1/4 of its original size (from 56.1 MB to 14 MB). I put the image (100x100x3) that is to be predicted into a ByteBuffer as 100x100x3 = 30,000 bytes. However, I got the following error during inference:
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 120000 bytes and a ByteBuffer with 30000 bytes.
at org.tensorflow.lite.Tensor.throwExceptionIfTypeIsIncompatible(Tensor.java:221)
at org.tensorflow.lite.Tensor.setTo(Tensor.java:93)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:136)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:216)
at org.tensorflow.lite.Interpreter.run(Interpreter.java:195)
at gov.nih.nlm.malaria_screener.imageProcessing.TFClassifier_Lite.recongnize(TFClassifier_Lite.java:102)
at gov.nih.nlm.malaria_screener.imageProcessing.TFClassifier_Lite.process_by_batch(TFClassifier_Lite.java:145)
at gov.nih.nlm.malaria_screener.Cells.runCells(Cells.java:269)
at gov.nih.nlm.malaria_screener.CameraActivity.ProcessThinSmearImage(CameraActivity.java:1020)
at gov.nih.nlm.malaria_screener.CameraActivity.access$600(CameraActivity.java:75)
at gov.nih.nlm.malaria_screener.CameraActivity$8.run(CameraActivity.java:810)
at java.lang.Thread.run(Thread.java:762)
The input image size to the model is 100x100x3. I'm currently predicting one image at a time, so I'm making the ByteBuffer 100x100x3 = 30,000 bytes. However, the log above says the TensorFlowLite buffer has 120,000 bytes. This makes me suspect that the converted tflite model is still in float format. Is this expected behavior? How can I get a quantized model that takes input images in 8-bit precision, like the example from the TensorFlow official repository does?
In the example code, the ByteBuffer used as input for tflite.run() is in 8 bit precision for the quantized model.
But I also read in the Google documentation: "At inference, weights are converted from 8-bits of precision to floating-point and computed using floating point kernels." These two statements seem to contradict each other.
private static final int BATCH_SIZE = 1;
private static final int DIM_IMG_SIZE = 100;
private static final int DIM_PIXEL_SIZE = 3;
private static final int BYTE_NUM = 1;
imgData = ByteBuffer.allocateDirect(BYTE_NUM * BATCH_SIZE * DIM_IMG_SIZE * DIM_IMG_SIZE * DIM_PIXEL_SIZE);
imgData.order(ByteOrder.nativeOrder());
... ...
int pixel = 0;
for (int i = 0; i < DIM_IMG_SIZE; ++i) {
    for (int j = 0; j < DIM_IMG_SIZE; ++j) {
        final int val = intValues[pixel++];
        imgData.put((byte) ((val >> 16) & 0xFF));
        imgData.put((byte) ((val >> 8) & 0xFF));
        imgData.put((byte) (val & 0xFF));
        // imgData.putFloat(((val >> 16) & 0xFF) / 255.0f);
        // imgData.putFloat(((val >> 8) & 0xFF) / 255.0f);
        // imgData.putFloat((val & 0xFF) / 255.0f);
    }
}
... ...
tfLite.run(imgData, labelProb);
Post-training quantization code:
import tensorflow as tf
import sys
import os
saved_model_dir = '/home/yuh5/Downloads/malaria_thinsmear.h5.pb'
input_arrays = ["input_2"]
output_arrays = ["output_node0"]
converter = tf.contrib.lite.TocoConverter.from_frozen_graph(saved_model_dir, input_arrays, output_arrays)
converter.post_training_quantize = True
tflite_model = converter.convert()
open("thinSmear_100.tflite", "wb").write(tflite_model)

Post-training quantization does not change the format of the input or output layers. You can run your model with data in the same format as used for training.
You may look into quantization-aware training to generate fully-quantized models, but I have no experience with it.
As for the sentence "At inference, weights are converted from 8-bits of precision to floating-point and computed using floating point kernels." This means that the weights are "de-quantized" to floating point values in memory, and computed with FP instructions, instead of performing integer operations.

Related

Tensorflow Lite Model: Incompatible shapes for input and output array

I'm currently working on a TensorFlow Lite image classifier app that can recognize UNO cards. But when I'm running the float model in the ImageClassifier class, something goes wrong.
The error is the following:
java.lang.IllegalArgumentException: Cannot copy from a TensorFlowLite tensor (Identity) with shape [1, 10647, 4] to a Java object with shape [1, 15].
Here's the code that throws that error:
tflite.run(imgData, labelProbArray);
And this is how I have created imgData and labelProbArray:
private static final int DIM_BATCH_SIZE = 1;
private static final int DIM_PIXEL_SIZE = 3; //r+g+b = 1+1+1
static final int DIM_IMG_SIZE_X = 416;
static final int DIM_IMG_SIZE_Y = 416;
imgData = ByteBuffer.allocateDirect(DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE * 4); //The last value because size of float is 4
labelProbArray = new float[1][labelList.size()]; // {1, 15}
You can check the inputs and outputs of the .tflite file. Source.
I know you should create a buffer for the output values, but I tried to import this and it didn't work:
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;
Any ideas? Thank you so much for reading^^
Edit v2:
Thanks to yyoon I realised that I hadn't populated my model with metadata, so I ran this line in my cmd:
python ./metadata_writer_for_image_classifier_uno.py \
  --model_file=./model_without_metadata/custom.tflite \
  --label_file=./model_without_metadata/labels.txt \
  --export_directory=model_with_metadata
Before that, I modified this file with my data:
_MODEL_INFO = {
    "custom.tflite":
        ModelSpecificInfo(
            name="UNO image classifier",
            version="v1",
            image_width=416,
            image_height=416,
            image_min=0,
            image_max=255,
            mean=[127.5],
            std=[127.5],
            num_classes=15)
}
And another error appeared:
ValueError: The number of output tensors (2) should match the number of output tensor metadata (1)
I don't know why my model has 2 output tensors...
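One way to see what the converted file actually exposes, before writing any metadata, is to list its output tensors from Python; the [1, 10647, 4] shape in the error above looks more like detection-style output (boxes) than class probabilities. A minimal sketch, using the file name from the metadata step:
import tensorflow as tf

# List every output tensor of the converted model.
interpreter = tf.lite.Interpreter(model_path="./model_without_metadata/custom.tflite")
interpreter.allocate_tensors()
for out in interpreter.get_output_details():
    print(out['name'], out['shape'], out['dtype'])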

tflite uint8 quantization model input and output float conversion

I have successfully converted a quantized 8-bit tflite model for object detection. My model was originally trained on images normalized by dividing by 255, so the original input range is [0, 1]. Since my quantized tflite model requires the input to be uint8, how can I convert my image (originally [0, 255]) to be correct for my network?
Also, how can I convert the output to float to compare the results with the floating point model?
The following code does not give me the right result.
im = cv2.imread(image_path)
im = im.astype(np.float32, copy=False)
input_image = im
input_image = np.array(input_image, dtype=np.uint8)
input_image = np.expand_dims(input_image, axis=0)
interpreter.set_tensor(input_details[0]['index'], input_image)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
output_data2 = interpreter.get_tensor(output_details[1]['index'])
output_data3 = interpreter.get_tensor(output_details[2]['index'])
min_1 = -8.198164939880371
max_1 = 8.798029899597168
scale = (max_1 - min_1)/ 255.0
min_2 = -9.77856159210205
max_2 = 10.169703483581543
scale_2 = (max_2 - min_2) / 255.0
min_3 = -14.382895469665527
max_3 = 11.445544242858887
scale_3 = (max_3 - min_3) / 255.0
output_data = (output_data ) * scale + min_1
output_data2 = (output_data2) * scale_2 + min_2
output_data3 = (output_data3) * scale_3 + min_3
I met the same problem, but in pose estimation.
Have you solved the problem yet?
Did you use quantization-aware training?
I think you can get a q and z value for your input image (because you have to give a mean and std-dev when you use the tflite API or the toco command to get a quantized 8-bit tflite model).
Try these formulas:
image = q_input * (image - z_input)
output_data = q_output * (output_data - z_output)
etc.
(For different layers you can access different q and z values.)
Let me know if you tried this way.
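A minimal sketch of that idea, reading the scale and zero point (the q and z above) from the interpreter instead of hard-coding min/max values; it assumes the interpreter, input_details and output_details objects from the question, and that each tensor actually reports a non-zero scale:
import numpy as np

# Quantize the [0, 1]-normalized image into the uint8 input range:
#   quantized = real / scale + zero_point
in_scale, in_zero_point = input_details[0]['quantization']
real_input = im.astype(np.float32) / 255.0
quantized = np.round(real_input / in_scale + in_zero_point)
quantized = np.clip(quantized, 0, 255).astype(np.uint8)

interpreter.set_tensor(input_details[0]['index'], np.expand_dims(quantized, axis=0))
interpreter.invoke()

# Dequantize each output back to float for comparison with the float model:
#   real = scale * (quantized - zero_point)
for out in output_details:
    scale, zero_point = out['quantization']
    q = interpreter.get_tensor(out['index'])
    print(scale * (q.astype(np.float32) - zero_point))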
I've converted the image via OpenCV to "CV_8UC3" and this worked for me:
// Convert to RGB color space
if (image.channels() == 1) {
    cv::cvtColor(image, image, cv::COLOR_GRAY2RGB);
} else {
    cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
}
image.convertTo(image, CV_8UC3);
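A rough Python/OpenCV equivalent of that conversion, matching the code in the question (whether your model wants RGB or BGR order is an assumption you should verify):
import cv2
import numpy as np

im = cv2.imread(image_path)  # image_path as in the question

# Convert to RGB color space (cv2.imread returns BGR, or a single channel for grayscale reads).
if im.ndim == 2 or im.shape[2] == 1:
    im = cv2.cvtColor(im, cv2.COLOR_GRAY2RGB)
else:
    im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)

im = im.astype(np.uint8)  # the CV_8UC3 equivalent: 3 channels, 8 bits each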

Tensorflow Lite: Cannot convert between a TensorFlowLite buffer and a ByteBuffer

I have tried to migrate a custom model to the Android platform. The TensorFlow version is 1.12. I used the recommended command line shown below:
tflite_convert \
  --output_file=test.tflite \
  --graph_def_file=./models/test_model.pb \
  --input_arrays=input_image \
  --output_arrays=generated_image
to convert the .pb file into tflite format.
I have checked input tensor shape of my .pb file in tensorboard:
dtype
{"type":"DT_FLOAT"}
shape
{"shape":{"dim":[{"size":474},{"size":712},{"size":3}]}}
Then I deployed the tflite file on Android and allocated the input ByteBuffer that I planned to feed to the model as:
imgData = ByteBuffer.allocateDirect(
        4 * 1 * 712 * 474 * 3);
When I run the model on an Android device, the app crashes and logcat prints:
2019-03-04 10:31:46.822 17884-17884/android.example.com.tflitecamerademo E/AndroidRuntime: FATAL EXCEPTION: main
Process: android.example.com.tflitecamerademo, PID: 17884
java.lang.RuntimeException: Unable to start activity ComponentInfo{android.example.com.tflitecamerademo/com.example.android.tflitecamerademo.CameraActivity}: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 786432 bytes and a ByteBuffer with 4049856 bytes.
It's so weird, since the allocated ByteBuffer is exactly the product of 4 * 3 * 474 * 712, whereas the TensorFlow Lite buffer is not a multiple of 474 or 712. I can't figure out why the tflite model got the wrong shape.
Thanks in advance if anyone can give a solution.
You could visualize the TFLite model to debug what buffer sizes are actually allocated to the input tensors.
TensorFlow Lite models can be visualized using the visualize.py script.
If the input tensor's buffer size isn't what you expect it to be, then there might be a bug with the conversion (or with the arguments provided to tflite_convert).
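If running visualize.py is inconvenient, the same information is available from the Python interpreter. For reference, 786,432 bytes is 196,608 float32 values, which would fit an input such as 256 x 256 x 3 rather than 474 x 712 x 3, so checking the actual shape is worthwhile. A minimal sketch:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="test.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
print(inp['shape'], inp['dtype'])
# Expected input buffer size in bytes:
print(int(np.prod(inp['shape'])) * np.dtype(inp['dtype']).itemsize)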
Hello guys,
I also had a similar problem yesterday. I would like to mention the solution that worked for me.
It seems like TFLite only supports exact square bitmap inputs.
For example:
Size 256 * 256: detection works.
Size 256 * 255: detection does not work and throws an exception.
And the maximum supported size is 257 * 257, which should be the max width and height for any bitmap input.
Here is the sample code to crop and resize the bitmap:
private var MODEL_HEIGHT = 257
private var MODEL_WIDTH = 257

// Crop bitmap
val croppedBitmap = cropBitmap(bitmap)

// Create a scaled version of the bitmap for model input
val scaledBitmap = Bitmap.createScaledBitmap(croppedBitmap, MODEL_WIDTH, MODEL_HEIGHT, true)
https://github.com/tensorflow/examples/blob/master/lite/examples/posenet/android/app/src/main/java/org/tensorflow/lite/examples/posenet/PosenetActivity.kt#L578
Crop Bitmap to maintain aspect ratio of model input.
private fun cropBitmap(bitmap: Bitmap): Bitmap {
    val bitmapRatio = bitmap.height.toFloat() / bitmap.width
    val modelInputRatio = MODEL_HEIGHT.toFloat() / MODEL_WIDTH
    var croppedBitmap = bitmap
    // Acceptable difference between the modelInputRatio and bitmapRatio to skip cropping.
    val maxDifference = 1e-5
    // Checks if the bitmap has similar aspect ratio as the required model input.
    when {
        abs(modelInputRatio - bitmapRatio) < maxDifference -> return croppedBitmap
        modelInputRatio < bitmapRatio -> {
            // New image is taller so we are height constrained.
            val cropHeight = bitmap.height - (bitmap.width.toFloat() / modelInputRatio)
            croppedBitmap = Bitmap.createBitmap(
                bitmap,
                0,
                (cropHeight / 2).toInt(),
                bitmap.width,
                (bitmap.height - cropHeight).toInt()
            )
        }
        else -> {
            val cropWidth = bitmap.width - (bitmap.height.toFloat() * modelInputRatio)
            croppedBitmap = Bitmap.createBitmap(
                bitmap,
                (cropWidth / 2).toInt(),
                0,
                (bitmap.width - cropWidth).toInt(),
                bitmap.height
            )
        }
    }
    return croppedBitmap
}
https://github.com/tensorflow/examples/blob/master/lite/examples/posenet/android/app/src/main/java/org/tensorflow/lite/examples/posenet/PosenetActivity.kt#L451
Thanks and Regards
Pankaj
I had changed the image dimensions from the standard 224 to 299 earlier in the model creation process for other reasons, so I just searched my Android Studio project for 224, updated the two final references in ImageClassifier.java to 299, and I was back in business.

How to import a saved TensorFlow model trained using tf.estimator and predict on input data

I have saved the model using the tf.estimator method export_savedmodel as follows:
export_dir="exportModel/"
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
classifier.export_savedmodel(export_dir, input_receiver_fn, as_text=False, checkpoint_path="Model/model.ckpt-400")
How can I import this saved model and use it for predictions?
I tried to search for a good base example, but it appears the documentation and samples are a bit scattered for this topic. So let's start with a base example: the tf.estimator quickstart.
That particular example doesn't actually export a model, so let's do that (not needed for Use Case 1):
def serving_input_receiver_fn():
    """Build the serving inputs."""
    # The outer dimension (None) allows us to batch up inputs for
    # efficiency. However, it also means that if we want a prediction
    # for a single instance, we'll need to wrap it in an outer list.
    inputs = {"x": tf.placeholder(shape=[None, 4], dtype=tf.float32)}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)
Huge asterisk on this code: there appears to be a bug in TensorFlow 1.3 that doesn't allow you to do the above export on a "canned" estimator (such as DNNClassifier). For a workaround, see the "Appendix: Workaround" section.
The code below references export_dir (return value from the export step) to emphasize that it is not "/path/to/model", but rather, a subdirectory of that directory whose name is a timestamp.
Use Case 1: Perform prediction in the same process as training
This is a scikit-learn type of experience, and it is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model:
classifier.train(input_fn=train_input_fn, steps=2000)
# [...snip...]
predictions = list(classifier.predict(input_fn=predict_input_fn))
predicted_classes = [p["classes"] for p in predictions]
Use Case 2: Load a SavedModel into Python/Java/C++ and perform predictions
Python Client
Perhaps the easiest thing to use if you want to do prediction in Python is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this:
from tensorflow.contrib import predictor
predict_fn = predictor.from_saved_model(export_dir)
predictions = predict_fn(
    {"x": [[6.4, 3.2, 4.5, 1.5],
           [5.8, 3.1, 5.0, 1.7]]})
print(predictions['scores'])
Java Client
package dummy;
import java.nio.FloatBuffer;
import java.util.Arrays;
import java.util.List;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
public class Client {
  public static void main(String[] args) {
    Session session = SavedModelBundle.load(args[0], "serve").session();

    Tensor x =
        Tensor.create(
            new long[] {2, 4},
            FloatBuffer.wrap(
                new float[] {
                  6.4f, 3.2f, 4.5f, 1.5f,
                  5.8f, 3.1f, 5.0f, 1.7f
                }));

    // Doesn't look like Java has a good way to convert the
    // input/output name ("x", "scores") to their underlying tensor,
    // so we hard code them ("Placeholder:0", ...).
    // You can inspect them on the command-line with saved_model_cli:
    //
    //   $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default
    final String xName = "Placeholder:0";
    final String scoresName = "dnn/head/predictions/probabilities:0";

    List<Tensor> outputs = session.runner()
        .feed(xName, x)
        .fetch(scoresName)
        .run();

    // Outer dimension is batch size; inner dimension is number of classes
    float[][] scores = new float[2][3];
    outputs.get(0).copyTo(scores);
    System.out.println(Arrays.deepToString(scores));
  }
}
C++ Client
You'll likely want to use tensorflow::LoadSavedModel with Session.
#include <unordered_set>
#include <utility>
#include <vector>
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"
namespace tf = tensorflow;
int main(int argc, char** argv) {
  const string export_dir = argv[1];
  tf::SavedModelBundle bundle;
  tf::Status load_status = tf::LoadSavedModel(
      tf::SessionOptions(), tf::RunOptions(), export_dir, {"serve"}, &bundle);
  if (!load_status.ok()) {
    std::cout << "Error loading model: " << load_status << std::endl;
    return -1;
  }

  // We should get the signature out of MetaGraphDef, but that's a bit
  // involved. We'll take a shortcut like we did in the Java example.
  const string x_name = "Placeholder:0";
  const string scores_name = "dnn/head/predictions/probabilities:0";

  auto x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({2, 4}));
  auto matrix = x.matrix<float>();
  matrix(0, 0) = 6.4;
  matrix(0, 1) = 3.2;
  matrix(0, 2) = 4.5;
  matrix(0, 3) = 1.5;
  matrix(1, 0) = 5.8;
  matrix(1, 1) = 3.1;
  matrix(1, 2) = 5.0;
  matrix(1, 3) = 1.7;

  std::vector<std::pair<string, tf::Tensor>> inputs = {{x_name, x}};
  std::vector<tf::Tensor> outputs;

  tf::Status run_status =
      bundle.session->Run(inputs, {scores_name}, {}, &outputs);
  if (!run_status.ok()) {
    std::cout << "Error running session: " << run_status << std::endl;
    return -1;
  }

  for (const auto& tensor : outputs) {
    std::cout << tensor.matrix<float>() << std::endl;
  }
}
Use Case 3: Serve a model using TensorFlow Serving
Exporting models in a manner amenable to serving a Classification model requires that the input be a tf.Example object. Here's how we might export a model for TensorFlow serving:
def serving_input_receiver_fn():
    """Build the serving inputs."""
    # The outer dimension (None) allows us to batch up inputs for
    # efficiency. However, it also means that if we want a prediction
    # for a single instance, we'll need to wrap it in an outer list.
    example_bytestring = tf.placeholder(
        shape=[None],
        dtype=tf.string,
    )
    features = tf.parse_example(
        example_bytestring,
        tf.feature_column.make_parse_example_spec(feature_columns)
    )
    return tf.estimator.export.ServingInputReceiver(
        features, {'examples': example_bytestring})

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)
The reader is referred to TensorFlow Serving's documentation for more instructions on how to set up TensorFlow Serving, so I'll only provide the client code here:
# Omitting a bunch of connection/initialization code...
# But at some point we end up with a stub whose lifecycle
# is generally longer than that of a single request.
stub = create_stub(...)
# The actual values for prediction. We have two examples in this
# case, each consisting of a single, multi-dimensional feature `x`.
# This data here is the equivalent of the map passed to the
# `predict_fn` in use case #2.
examples = [
    tf.train.Example(
        features=tf.train.Features(
            feature={"x": tf.train.Feature(
                float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})),
    tf.train.Example(
        features=tf.train.Features(
            feature={"x": tf.train.Feature(
                float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})),
]
# Build the RPC request.
predict_request = predict_pb2.PredictRequest()
predict_request.model_spec.name = "default"
predict_request.inputs["examples"].CopyFrom(
tensor_util.make_tensor_proto(examples, tf.float32))
# Perform the actual prediction.
stub.Predict(predict_request, PREDICT_DEADLINE_SECS)
Note that the key, examples, that is referenced in the predict_request.inputs needs to match the key used in the serving_input_receiver_fn at export time (cf. the constructor to ServingInputReceiver in that code).
Appendix: Working around Exports from Canned Models in TF 1.3
There appears to be a bug in TensorFlow 1.3 in which canned models do not export properly for Use Case 2 (the problem does not exist for "custom" estimators). Here is a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example:
# Build 3 layer DNN with 10, 20, 10 units respectively.
class Wrapper(tf.estimator.Estimator):
    def __init__(self, **kwargs):
        dnn = tf.estimator.DNNClassifier(**kwargs)

        def model_fn(mode, features, labels):
            spec = dnn._call_model_fn(features, labels, mode)
            export_outputs = None
            if spec.export_outputs:
                export_outputs = {
                    "serving_default": tf.estimator.export.PredictOutput(
                        {"scores": spec.export_outputs["serving_default"].scores,
                         "classes": spec.export_outputs["serving_default"].classes})}
            # Replace the 3rd argument (export_outputs)
            copy = list(spec)
            copy[4] = export_outputs
            return tf.estimator.EstimatorSpec(mode, *copy)

        super(Wrapper, self).__init__(model_fn, kwargs["model_dir"], dnn.config)

classifier = Wrapper(feature_columns=feature_columns,
                     hidden_units=[10, 20, 10],
                     n_classes=3,
                     model_dir="/tmp/iris_model")
I don't think there is a bug with canned Estimators (or rather, if there ever was one, it has been fixed). I was able to successfully export a canned estimator model using Python and import it in Java.
Here is my code to export the model:
a = tf.feature_column.numeric_column("a");
b = tf.feature_column.numeric_column("b");
feature_columns = [a, b];
model = tf.estimator.DNNClassifier(feature_columns=feature_columns ...);
# To export
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns);
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec);
servable_model_path = model.export_savedmodel(servable_model_dir, export_input_fn, as_text=True);
To import the model in Java, I used the Java client code provided by rhaertel80 above and it works. Hope this also answers Ben Fowler's question above.
It appears that the TensorFlow team does not agree that there is a bug in version 1.3 using canned estimators for exporting a model under use case #2. I submitted a bug report here:
https://github.com/tensorflow/tensorflow/issues/13477
The response I received from TensorFlow is that the input must only be a single string tensor. It appears that there may be a way to consolidate multiple features into a single string tensor using serialized TF.examples, but I have not found a clear method to do this. If anyone has code showing how to do this, I would be appreciative.
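For what it's worth, a hedged sketch of what that consolidation might look like: each row's features are packed into one tf.train.Example and serialized to a single string, so the model input is a rank-1 string tensor. The feature names here are illustrative only:
import tensorflow as tf

def make_serialized_example(a, b):
    # Pack several numeric features into one tf.Example, then serialize it
    # to a single string.
    example = tf.train.Example(features=tf.train.Features(feature={
        "a": tf.train.Feature(float_list=tf.train.FloatList(value=[a])),
        "b": tf.train.Feature(float_list=tf.train.FloatList(value=[b])),
    }))
    return example.SerializeToString()

serialized = [make_serialized_example(1.0, 2.0),
              make_serialized_example(3.0, 4.0)]
# With a parsing serving input receiver (as in the export code above), this list
# of strings is what gets fed to the serving signature's single string input.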
You need to export the saved model using the estimator's export_savedmodel method, and you need to define an input receiver function to pass input to.
Later you can load the saved model (generally saved_model.pb) from disk and serve it.
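A minimal sketch of that loading step in TF 1.x; the export directory and tensor names are placeholders that you would look up with saved_model_cli, and serialized_example is a tf.train.Example serialized to a string:
import tensorflow as tf

export_dir = "exportModel/<timestamp>"  # the timestamped subdirectory created by export_savedmodel

with tf.Session(graph=tf.Graph()) as sess:
    # "serve" is the tag set used by export_savedmodel.
    tf.saved_model.loader.load(sess, ["serve"], export_dir)
    scores = sess.run(
        "dnn/head/predictions/probabilities:0",                       # placeholder output name
        feed_dict={"input_example_tensor:0": [serialized_example]})   # placeholder input name
    print(scores)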
TensorFlow: How to predict from a SavedModel?

PyOpenCL reduction Kernel on each pixel of image as array instead of each byte (RGB mode, 24 bits )

I'm trying to calculate the average Luminance of an RGB image. To do this, I find the luminance of each pixel i.e.
L(r,g,b) = X*r + Y*g + Z*b (some linear combination).
And then find the average by summing up luminance of all pixels and dividing by width*height.
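As a plain-NumPy reference point for the same computation (the Rec. 709 weights below are just one example of the X, Y, Z coefficients above):
import numpy as np
from PIL import Image

im = np.asarray(Image.open('image_00000001.bmp'), dtype=np.float32)  # shape (h, w, 3)
lum = 0.2126 * im[..., 0] + 0.7152 * im[..., 1] + 0.0722 * im[..., 2]
avg_luminance = lum.mean()  # same as summing and dividing by width*height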
To speed this up, I'm using pyopencl.reduction.ReductionKernel
The array I pass to it is a Single Dimension Numpy Array so it works just like the example given.
import Image
import numpy as np
im = Image.open('image_00000001.bmp')
data = np.asarray(im).reshape(-1) # so data is a single dimension list
# data.dtype is uint8, data.shape is (w*h*3, )
I want to incorporate the following code from the example into it, i.e. I would make changes to the datatype and the type of arrays I'm passing. This is the example:
a = pyopencl.array.arange(queue, 400, dtype=numpy.float32)
b = pyopencl.array.arange(queue, 400, dtype=numpy.float32)
krnl = ReductionKernel(ctx, numpy.float32, neutral="0",
reduce_expr="a+b", map_expr="x[i]*y[i]",
arguments="__global float *x, __global float *y")
my_dot_prod = krnl(a, b).get()
Except, my map_expr will work on each pixel and convert each pixel to its luminance value.
And reduce expr remains the same.
The problem is, it works on each element in the array, and I need it to work on each pixel, which is 3 consecutive elements at a time (RGB).
One solution is to have three different arrays, one for R, one for G and one for B, which would work, but is there another way?
Edit: I changed the program to illustrate the char4 usage instead of float4:
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
deviceID = 0
platformID = 0
workGroup=(1,1)
N = 10
testData = np.zeros(N, dtype=cl_array.vec.char4)
dev = cl.get_platforms()[platformID].get_devices()[deviceID]
ctx = cl.Context([dev])
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
Data_In = cl.Buffer(ctx, mf.READ_WRITE, testData.nbytes)
prg = cl.Program(ctx, """
__kernel void Pack_Cmplx( __global char4* Data_In, int N)
{
int gid = get_global_id(0);
//Data_In[gid] = 1; // This would change all components to one
Data_In[gid].x = 1; // changing single component
Data_In[gid].y = 2;
Data_In[gid].z = 3;
Data_In[gid].w = 4;
}
""").build()
prg.Pack_Cmplx(queue, (N,1), workGroup, Data_In, np.int32(N))
cl.enqueue_copy(queue, testData, Data_In)
print(testData)
I hope it helps.
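Building on that, a sketch of how the reduction could operate on whole pixels: pad the flat RGB data to RGBA so that each element of the device array is one uchar4 pixel, then let the map expression compute the luminance of one pixel (the coefficients are again just an example, and `data` is the flattened array from the question):
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
from pyopencl.reduction import ReductionKernel

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Pad (h*w, 3) uint8 RGB into an RGBA buffer so one array element == one pixel.
rgb = data.reshape(-1, 3)
rgba = np.zeros((rgb.shape[0], 4), dtype=np.uint8)
rgba[:, :3] = rgb                                   # the .w component stays 0 and is ignored
d_pixels = cl_array.to_device(queue, rgba.view(cl_array.vec.uchar4).reshape(-1))

# Sum the per-pixel luminance on the device, then divide by the pixel count.
krnl = ReductionKernel(ctx, np.float32, neutral="0",
                       reduce_expr="a+b",
                       map_expr="0.2126f*px[i].x + 0.7152f*px[i].y + 0.0722f*px[i].z",
                       arguments="__global const uchar4 *px")
avg_luminance = krnl(d_pixels).get() / rgb.shape[0]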