How to solve "Encountered unresolved custom op: edgetpu-custom-op" exception - kotlin

I'm trying to implement my custom ML model in a Kotlin app.
I first built and trained my model in GCP Vertex AI.
After my model was ready, I exported it as a TensorFlow Lite model for the edge and then uploaded it to my Firebase Machine Learning project.
After that I followed the guide to implement a custom TensorFlow Lite model on Android.
When I run my app, it crashes on this part of the code:
val conditions = CustomModelDownloadConditions.Builder()
    .requireWifi() // Also possible: .requireCharging() and .requireDeviceIdle()
    .build()
FirebaseModelDownloader.getInstance()
    .getModel("your_model", DownloadType.LOCAL_MODEL_UPDATE_IN_BACKGROUND,
        conditions)
    .addOnSuccessListener { model: CustomModel? ->
        // Download complete. Depending on your app, you could enable the ML
        // feature, or switch from the local model to the remote model, etc.
        // The CustomModel object contains the local path of the model file,
        // which you can use to instantiate a TensorFlow Lite interpreter.
        val modelFile = model?.file
        if (modelFile != null) {
            interpreter = Interpreter(modelFile) // this line crashes
        }
    }
More specifically, it crashes at the line "interpreter = Interpreter(modelFile)".
I get the following exception:
java.lang.IllegalStateException: Internal error: Unexpected failure
when preparing tensor allocations: Encountered unresolved custom op:
edgetpu-custom-op. See instructions:
https://www.tensorflow.org/lite/guide/ops_custom Node number 0
(edgetpu-custom-op) failed to prepare.
What is the meaning of this error? How can I solve it?

Related

Not able to use MockK in Android Espresso UI Testing

I am getting an error when trying to use MockK in a UI test, although it was working perfectly in unit test cases:
MockK could not self-attach a jvmti agent to the current VM
Full error report
Caused by: io.mockk.proxy.MockKAgentException: MockK could not self-attach a jvmti agent to the current VM. This feature is required for inline mocking.
This error occured due to an I/O error during the creation of this agent: java.io.IOException: Unable to dlopen libmockkjvmtiagent.so: dlopen failed: library "libmockkjvmtiagent.so" not found
Potentially, the current VM does not support the jvmti API correctly
at io.mockk.proxy.android.AndroidMockKAgentFactory.init(AndroidMockKAgentFactory.kt:67)
at io.mockk.impl.JvmMockKGateway.<init>(JvmMockKGateway.kt:46)
at io.mockk.impl.JvmMockKGateway.<clinit>(JvmMockKGateway.kt:186)
... 30 more
Caused by: java.io.IOException: Unable to dlopen libmockkjvmtiagent.so: dlopen failed: library "libmockkjvmtiagent.so" not found
at dalvik.system.VMDebug.nativeAttachAgent(Native Method)
at dalvik.system.VMDebug.attachAgent(VMDebug.java:693)
at android.os.Debug.attachJvmtiAgent(Debug.java:2617)
at io.mockk.proxy.android.JvmtiAgent.<init>(JvmtiAgent.kt:48)
at io.mockk.proxy.android.AndroidMockKAgentFactory.init(AndroidMockKAgentFactory.kt:40)
Let me know if there is any other way to initialize MockK for use in Espresso.
When I tried to add
androidTestImplementation "org.mockito:mockito-inline:$mockitoVersion"
I observed this error:
2 files found with path 'mockito-extensions/org.mockito.plugins.MockMaker'.
Adding a packagingOptions block may help, please refer to
https://developer.android.com/reference/tools/gradle-api/7.2/com/android/build/api/dsl/ResourcesPackagingOptions
for more information
Versions
mockk version = 1.12.4
Android = 32
kotlin_version = '1.6.21'
Code which causes this issue when added in Android UI test cases (Espresso):
val presenter = mockk<LoginPresenter>()
val view = mockk<LoginView>()
How can I perform a mock API call like this?
val presenter = mockk<LoginPresenter>()
val view = mockk<LoginView>()
onView(withId(R.id.button_login)).perform(loginClick())
But I want the mock API to be called: instead of loginClick() in perform(), can I somehow invoke the stubbing below so that my app uses mock APIs? Or is there any way to make my entire test case file use MockK data?
every { presenter.onLoginButtonClicked("bc#mail.com","Abc123") } returns view.onCognitoLoginSuccess()
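For illustration, here is a minimal sketch of how MockK stubbing can sit inside an Espresso test. The LoginActivity class, the app's resource package, the test-only way of handing the mocked presenter to the screen, and the assumption that onLoginButtonClicked() returns Unit are all hypothetical; MockK only intercepts calls made on the mocked instance, so the app will use mock data only if that instance is the one the login screen actually talks to.

import androidx.test.core.app.ActivityScenario
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.runners.AndroidJUnit4
import com.example.app.R // hypothetical: the app's resource class
import io.mockk.Runs
import io.mockk.every
import io.mockk.just
import io.mockk.mockk
import io.mockk.verify
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginScreenTest {

    // Relaxed mock: unstubbed calls return defaults instead of throwing.
    private val presenter = mockk<LoginPresenter>(relaxed = true)

    @Test
    fun loginButtonTalksToMockedPresenter() {
        // Stub the call the click is expected to trigger (assumes it returns Unit).
        every { presenter.onLoginButtonClicked("bc#mail.com", "Abc123") } just Runs

        // The screen must be wired to use this `presenter` instance (e.g. through a
        // test-only DI module); otherwise the real presenter and the real API still run.
        ActivityScenario.launch(LoginActivity::class.java)

        onView(withId(R.id.button_login)).perform(click())

        verify { presenter.onLoginButtonClicked(any(), any()) }
    }
}

If every test in the file should see mocked data, the same mock can be created once and installed from a @Before method (or from a test application that swaps the dependency graph) rather than per test.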
For me adding this solved the problem:
android {
    testOptions {
        packagingOptions {
            jniLibs {
                useLegacyPackaging = true
            }
        }
    }
}
I found this here. Hope it helps.
According to here:
Instrumented Android tests are all failing due to issue with mockk 1.12.4
I used io.mockk:mockk-android:1.12.4 and I had the same issue.
SOLUTION:
I changed the version of io.mockk:mockk-android to 1.12.3 and the tests ran fine for me:
androidTestImplementation "io.mockk:mockk-android:1.12.3"

Is it necessary to make the tensorflow serving system a lib jar package?

First, I have to acknowledge that the architecture of TensorFlow Serving is very good. But in some scenarios, for example picture object detection, when a picture arrives it needs to be processed by many models; if we send the image to the remote server in a loop and wait for the responses, we get a large delay, and image transfer is very resource intensive.
Because our company uses Java to provide external RPC services, I packaged TensorFlow Serving into a '.so' lib and then provided a Java API: Java calls the native methods of the C lib package, so users can call Serving locally as if they were calling Serving remotely, while saving the time of remote transmission. Below is my Java project structure:
[image: Java project structure]
The code in Java is very simple:
public class TensorflowServerPredictorImpl {
    static {
        try {
            NativeLibLoader.initLoad();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    private final long handle; // server C impl handle

    public native long init(byte[] options);
    private native byte[] predict(long handle, byte[] request);

    public TensorflowServerPredictorImpl(ServerOptions.ServerConfig config) {
        handle = init(config.toByteArray());
    }

    public Predict.PredictResponse Predict(Predict.PredictRequest request) throws Exception {
        byte[] requestByteArray = request.toByteArray();
        byte[] responseByteArray = predict(this.handle, requestByteArray);
        Predict.PredictResponse response = Predict.PredictResponse.parseFrom(responseByteArray);
        return response;
    }
}
And the use of the lib looks like this:
public class Test {
    public static void main(String[] args) throws Exception {
        URL url = Test.class.getResource("/");
        String path = url.getPath() + "model_config_file.cfg";
        ServerOptions.ServerConfig.Builder builder = ServerOptions.ServerConfig.newBuilder();
        builder.setModelConfigFile(path);
        ServerOptions.ServerConfig config = builder.build();
        TensorflowServerPredictorImpl predictor = new TensorflowServerPredictorImpl(config);
        Predict.PredictRequest request = buildRequest(1);
        Predict.PredictResponse response = predictor.Predict(request);
    }
}
The predictor supports multithreading.
How do other people solve such problems? Does it make sense for me to do this?
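As a side note on the multithreading claim, here is a minimal Kotlin (JVM) sketch of what sharing a single predictor across a thread pool could look like. It reuses the TensorflowServerPredictorImpl class and the buildRequest helper from the snippets above, uses a placeholder config path, and assumes Predict() really is thread-safe as stated.

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    val config = ServerOptions.ServerConfig.newBuilder()
        .setModelConfigFile("model_config_file.cfg") // placeholder path
        .build()
    val predictor = TensorflowServerPredictorImpl(config)

    // One shared predictor instance, many concurrent Predict() calls.
    val pool = Executors.newFixedThreadPool(4)
    repeat(16) { i ->
        pool.execute {
            val response = predictor.Predict(buildRequest(i)) // asker's request builder
            println("request $i -> ${response.outputsCount} outputs")
        }
    }
    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.MINUTES)
}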
I had a similar problem to solve and installed a TensorFlow Serving application in a Docker container. The requests for classification (in my case time series) are sent to the serving via gRPC (protobuf), which is a binary format. This has worked well so far, although making the protobuf interface work in Java was quite a steep learning curve.
But now I also have to serve several models built with different technologies (TensorFlow, Python sklearn, ...), and some models cannot be served by TensorFlow Serving. So I have to set up a broker application in Python which receives the requests from the Java application and sends them out to the many models. A nice new feature of the broker is that it can now vote over the results of the different models and send back only the voting result to Java.
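For illustration, here is a minimal JVM sketch (in Kotlin) of the kind of gRPC Predict call described above. It assumes the classes generated from the tensorflow-serving-api protos and grpc-java are on the classpath, and that a model named "my_model" with a "serving_default" signature and a single 1x3 float input called "input" is served on localhost:8500; all of those names, the shape and the address are placeholders.

import io.grpc.ManagedChannelBuilder
import org.tensorflow.framework.DataType
import org.tensorflow.framework.TensorProto
import org.tensorflow.framework.TensorShapeProto
import tensorflow.serving.Model
import tensorflow.serving.Predict
import tensorflow.serving.PredictionServiceGrpc

fun main() {
    // Plaintext channel to the model server's gRPC port.
    val channel = ManagedChannelBuilder.forAddress("localhost", 8500)
        .usePlaintext()
        .build()
    val stub = PredictionServiceGrpc.newBlockingStub(channel)

    // Build a 1x3 float tensor for the input named "input".
    val input = TensorProto.newBuilder()
        .setDtype(DataType.DT_FLOAT)
        .setTensorShape(
            TensorShapeProto.newBuilder()
                .addDim(TensorShapeProto.Dim.newBuilder().setSize(1))
                .addDim(TensorShapeProto.Dim.newBuilder().setSize(3))
        )
        .addAllFloatVal(listOf(0.1f, 0.2f, 0.3f))
        .build()

    val request = Predict.PredictRequest.newBuilder()
        .setModelSpec(
            Model.ModelSpec.newBuilder()
                .setName("my_model")
                .setSignatureName("serving_default")
        )
        .putInputs("input", input)
        .build()

    val response = stub.predict(request)
    println(response.outputsMap.keys) // output tensors keyed by name

    channel.shutdown()
}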

Can't read files inside native service app - Gear Fit 2

I have a very strange issue on Gear Fit 2. I have a native service in a hybrid app (web app + native service in one package) and can't access files. I get the error message: Permission denied.
The code stops at getting the folder attributes, at the line: if (stat(path, &st) == -1).
I don't know what is wrong. The code reads files without problems when it is inside a UI app, but it doesn't want to work in a hybrid native service. Are there any constraints on reading files from a service app on Gear Fit 2?
Of course I added the privileges to tizen-manifest.xml:
http://tizen.org/privilege/mediastorage
http://tizen.org/privilege/externalstorage
Code:
char *path;
storage_get_root_directory(STORAGE_TYPE_INTERNAL, &path); // works OK
// path contains now: /opt/usr/media

struct stat st;
if (stat(path, &st) == -1) { // STOP HERE, stat() returns -1
    return; // execution enters here and ends
}
if (S_ISDIR(st.st_mode)) {
    ... // never entered
}
...
After calling
stat(path, &st); // it sets errno on failure
strerror(errno) returns the message: Permission denied.

Realm throws exception with empty unit test

In an Objective-C project, we started writing our new unit tests in Swift. I'm just now trying to create our first unit test, which should verify that the results of parsed JSON are saved successfully. However, the test already fails during setUp() due to the following error:
[ProjectTests.Project testInitializingOverlayCollectionCreatesAppropriateRealmObjects] : failed: caught "NSInvalidArgumentException", "+[RLMObjectBase ignoredProperties]: unrecognized selector sent to class 0x759b70
So apparently it tries to execute ignoredProperties on the RLMObjectBase class, and that method isn't implemented. I'm not sure how this happens, because I have yet to initialise anything beyond creating an RLMRealm object with a random in-memory identifier.
ProjectTests.swift
import XCTest

class ProjectOverlayCollectionTests: XCTestCase {
    var realm: RLMRealm!

    override func setUp() {
        super.setUp()
        // Put setup code here. This method is called before the invocation of each test method in the class.
        let realmConfig = RLMRealmConfiguration()
        realmConfig.inMemoryIdentifier = NSUUID().UUIDString
        do {
            realm = try RLMRealm(configuration: realmConfig) // <-- Crashes here.
        }
        catch _ as NSError {
            XCTFail()
        }
    }

    override func tearDown() {
        // Put teardown code here. This method is called after the invocation of each test method in the class.
        super.tearDown()
    }

    func testInitializingOverlayCollectionCreatesAppropriateRealmObjects() {
        XCTAssertTrue(true)
    }
}
Project-Bridging-Header.h
#import <Realm/Realm.h>
Podfile
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '7.1'
def shared_pods
pod 'Realm', '0.95.0'
end
target 'Project' do
shared_pods
end
target 'ProjectTests' do
shared_pods
end
As mentioned in the Realm documentation:
Avoid Linking Realm and Tested Code in Test Target
Remove the Realm pod from the ProjectTests target and all is right with the world.
Update: This answer is outdated. As @rommex mentions in a comment, following the current Realm installation documentation should link it to both your module and test targets without problems. However, I have not checked this.

Can you test SetUp success/failure in Google Test?

Is there a way to check that SetUp code has actually worked properly in GTest fixtures, so that the whole fixture or test application can be marked as failed rather than getting weird test results and/or having to explicitly check this in each test?
If you put your fixture setup code into a SetUp method, and it fails and issues a fatal failure (ASSERT_XXX or FAIL macros), Google Test will not run your test body. So all you have to write is
class MyTestCase : public testing::Test {
 protected:
  bool InitMyTestData() { ... }

  virtual void SetUp() {
    ASSERT_TRUE(InitMyTestData());
  }
};
TEST_F(MyTestCase, Foo) { ... }
Then MyTestCase.Foo will not execute if InitMyTestData() returns false. If you already have nonfatal assertions in your setup code (i.e., EXPECT_XXX or ADD_FAILURE), you can generate a fatal assertion from them by writing ASSERT_FALSE(HasFailure()); You can find more info on failure detection in the Google Test Advanced Guide wiki page.