Xamarin Test Cloud file upload

I am running tests on Xamarin Test Cloud, where I am unable to upload images or files in the script that I have written. The script gets stuck at the statement app.Tap(x => x.Text("Attach Image")), where a button is tapped and options are shown to upload the file from the gallery or camera...
app.Tap(x => x.Class("FormsImageView").Index(3));
// app.Repl();
app.Tap(x => x.Text("Loews Chicago O'Hare"));
//app.Tap(x => x.Text("Attach Image"));
//app.Tap(x => x.Text("Open Gallery"));
//app.Tap(x => x.Id("text1"));
app.Tap(x => x.Class("EditorEditText"));

If I'm understanding your approach correctly, then the problem is likely two-fold:
1. Xamarin.UITest cannot automate system apps like the Gallery or Camera. To run tests that depend on features of the system apps, the behavior has to actually be integrated into your app itself, so that it doesn't require launching a separate app; or you have to employ backdoor methods to simulate the behavior for your tests (see the sketch below).
2. You might not be including the files in a way that lets Xamarin.UITest access them. Files your app needs during tests must either be included as an embedded resource or uploaded using the --data optional flag on the command line.
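As a rough sketch of the backdoor route (the method and helper names here are made up for illustration, not part of your app), an Android backdoor could look like this:

// In the app's MainActivity; requires a reference to Mono.Android.Export.
// Bypasses the system gallery entirely by attaching a bundled test image.
[Java.Interop.Export("simulateImageAttached")]
public string SimulateImageAttached(string imageName)
{
    AttachmentService.AttachBundledImage(imageName); // hypothetical hook into your app's logic
    return "ok";
}

The test then invokes it instead of tapping "Open Gallery":

app.Invoke("simulateImageAttached", "testImage.png");

If the test image itself has to travel with the test run, it can be submitted alongside the app, e.g. test-cloud.exe submit YourApp.apk <api-key> --data testImage.png (see the forum link below).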
More information:
Backdoors guide: https://developer.xamarin.com/guides/testcloud/uitest/working-with/backdoors/
Generic backdoors sample: https://github.com/King-of-Spades/BackdoorExample/tree/master
Information on uploading data along with your app: https://forums.xamarin.com/discussion/comment/189113/#Comment_189113

Related

Grails compile .gsp to .html from gradle/command line

Is there any way I can compile a GSP view into HTML, given a provided model, from a Groovy script without having to run the whole Grails application?
The use case is that, due to client demands, we have to use JavaScript/jQuery to create the front-end of the application. We already have the architecture defined, but we're having issues creating front-end integration tests, since our front-end is composed of .gsp, JavaScript and CSS, all componentized.
For instance: a Button may have a .gsp, a .js and a .css associated with it.
The ideal solution for creating front-end component integration tests: have the .gsp compiled into HTML before the tests run, so we can run the assertions in the *.test.js files. Since we don't need the database, services or other instances to compile the .gsp, there's no need for the application to be running, which avoids the time to boot the app.
Thanks in advance!
The following code should help you.
import groovy.text.GStringTemplateEngine
import groovy.text.Template
// BuildSettings and FastStringWriter come from Grails; their packages vary by version

File gspFile = new File(BuildSettings.BASE_DIR, "grails-app/views/path/to/view/${viewName}.gsp")
This will locate the GSP; you can also look it up by absolute URL.
if (gspFile.exists()) {
    def model = model((Class) scaffoldValue) // or build the model map your view expects yourself
    def viewGenerator = new GStringTemplateEngine()
    Template t = viewGenerator.createTemplate(gspFile)
    def contents = new FastStringWriter()
    t.make(model.asMap()).writeTo(contents)
    // contents.toString() now holds the rendered HTML
}
You can swap in a different Writer yourself; it could be a stream to a file, for example.
I didn't test it, but it should work :)

How to save PDF from HTML in Azure Functions

I'm developing an application which will have a web crawler for some sites.
The application will trigger an Azure Function by URL, where the crawler will start the work.
So far, so good, but we'll have to save some evidence that the crawler passed through the site. We're thinking of saving a PDF file of the screen the crawler passed through, but, as Azure Functions doesn't have GDI+, it won't work with Selenium or PhantomJS.
A different approach could be to download the HTML content and somehow save that HTML string (with all its JS and CSS dependencies) into a PDF file.
I'd like a library which can work within Azure Functions to take a screenshot of some URL (or HTML string) and save it to PDF.
Thanks.
Unfortunately, the App Service sandbox, whose rules Azure Functions lives by, is going to block most GDI+ API calls. We have had success with one third-party library (ByteScout) for some PDF generation needs, but I think in your case that type of operation is explicitly blocked. You can find more details here: https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#win32ksys-user32gdi32-restrictions
There is no workaround that I'm aware of, because at the end of the day most of these solutions rely on GDI+ in the underlying OS (directly or indirectly).
Your only real option is to offload that workload to a virtual machine without the restriction on the API. That could take the form of a dedicated VM, or something like an Azure Container Instance whose life-cycle you can manage more dynamically as needed. We do something similar today: we have a message queue being monitored on a VM, and our Azure Function drops the request into the queue for processing.
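As a sketch of that queue hand-off in a C# function (the queue name pdf-requests is hypothetical, and the VM-side worker that actually renders the PDF is out of scope here):

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using System.Net.Http;
using System.Threading.Tasks;

public static class EnqueuePdfRequest
{
    [FunctionName("EnqueuePdfRequest")]
    public static async Task Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [Queue("pdf-requests")] IAsyncCollector<string> queue)
    {
        // Read the target URL from the request body and hand it off;
        // a VM outside the sandbox dequeues it, renders the page, and saves the PDF.
        string url = await req.Content.ReadAsStringAsync();
        await queue.AddAsync(url);
    }
}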

Fine Uploader - using both Direct S3 and Traditional in same project

I have an existing FineUploader implementation for small files using the Traditional (upload-to-server) version which is working great. However, I'd like to also allow Direct S3 uploads from a different part of the application which deals with large attachments, without rewriting the existing code for small files.
Is there some way to allow both Direct S3 and Traditional uploads to work alongside each other? This is a single-page application, so I can't just load one or the other Fine Uploader version depending on which page I'm on.
I tried just including both fine-uploader JS files, but it seemed to break my existing code.
Client-side code:
$uploadContainer = this.$('.uploader')
$uploadButton = this.$('.upload-button')
$uploadContainer.fineUploader(
  request:
    endpoint: @uploadUrl
    inputName: @inputName
    params:
      authenticity_token: $('meta[name="csrf-token"]').attr('content')
  button: $uploadButton
).on 'complete', (event, id, fileName, response) =>
  @get('controller').receiveUpload(response)
Good find, @Melinda.
Fine Uploader lives within a custom-named namespace so that it does not conflict with other potential global variables; this is the qq namespace (historically named). What is happening is that each custom build redeclares this namespace, along with all its member objects, when you include it in the <script> tags on your page.
I've opened an issue on our bug tracker that explains the problem in more technical detail, and we're looking to prioritize a fix to the customize page so that no one runs into this in the future.

Worklight Direct update

Does anybody know if direct update updates everything that lives in the common directory structure? I use the same code base for multiple apps, the only change being certain settings within a js file that tells the application how to act. Is there a directory I can put that js file in that would be safe from the direct update feature?
I can't seem to find any specific information on IBM's website.
I think you guys need to be careful which terms you are using, in order not to confuse people who may be looking for similar help.
Environments are specific to the OS you are using: iOS, BlackBerry, Android, etc.
Skins are based on the environment, and aren't generic across all platforms. When you create a skin you must choose which environment it runs in.
So, to correct things a bit: direct updates will update all skin resources in the targeted environments.
For example: you have an app with Android and iOS versions.
When you create skins, you are essentially creating a responsive type of design according to your parameters. For instance, if you have a 2.3 vs. a 4.2 Android OS, you can set a look and feel for each. However, these utilize a single web resource base. The APK would be the same for both versions of the app (by default) and have two available skins. At runtime, IBM Worklight's 'runtime skinning' (hence the name) goes through the parameter check for the OS and loads that skin's overriding web code.
You could technically override all of the web code to be completely different for both skins, but that would be bulky and inefficient.
When you direct update, you are updating all the resources of that particular environment (including both skins), not the common folder/environment.
So an updated Android app (both skins) would have updated web resources (if you deployed the Android .wlapp), and an iOS version would stay the same.
If you look at the Android project after a build (native -> assets -> www -> default or skin) you can find the shared web resources generated by the common environment. However, those are only put there each time you do a new build.
(In the picture I attached, the left side shows an older version of the Android build for both skins; the right side shows a preview of the newer common resources after deploying only the common.wlapp. So you can see that they are separate.)
Sorry if that was long-winded, but I thought I would be thorough.
To answer the original question: have you thought of having all the parameters of the store loaded from user input or a setup step? If you are trying to connect to three different stores, create some form for settings control that will access different back ends or specific adapters. You could also create three different config.js files that load depending on parameters you set. The other option is to set up different versions of your app, specific to each store.
Example: versions 1.11, 1.12 and 1.13 can be three versions of the same app for stores 1, 2 and 3. They can be modified independently and have three sets of web resources. When you need to update, jump up to versions 1.21, 1.22 and 1.23. It seems a bit of a workaround, but it may be your best bet for getting three versions of the same app to fall within the single-application category (keep three config.js variants to modify for the three stores).
To the best of my knowledge, Direct Update will update every web resource of the skin you're using (HTML, CSS, JS). However, I'm no expert with it.
If you're supporting only Android and iOS applications and need a way to store settings, I recommend JSONStore. Otherwise, look into Cordova Storage, Local Storage or IndexedDB.
Using a JSONStore collection called settings will allow you to store data on disk inside the app's directory. It will persist until you call one of the removal methods, like destroy, or until the application is uninstalled. There are also ways of linking collections to Worklight adapters to pull/push data from/to a server. The links below provide further details.
Regarding "the only change being certain settings within a js file":
Create a collection for your settings:
var options = {};
options.onSuccess = function () {
    // ... what to do after init finishes
};
options.onFailure = function () {
    // ... what to do if init fails
};
var settings = WL.JSONStore.initCollection('settings',
    {background: 'string', itemsPerPage: 'number'}, options);
You can add new settings after initCollection onSuccess has been called:
settings.add({background: 'red', itemsPerPage: 20}, options);
You can find settings stored after initCollection onSuccess has been called:
settings.findAll({onSuccess: function (results) {
    console.log(JSON.stringify(results));
}});
You can read more about JSONStore in the Getting Started modules; see modules 7.9, 7.10, 7.11 and 7.12. There is further information in the API documentation in the IBM InfoCenter. The methods described above are initCollection, add and findAll.
Since version 5.0.3, I think, direct update will not update all the web resources, only those of the skin you are using.
Say you have skin def and skin skin2.
If you are on def:
- make a change to def on the server -> you will get a direct update for def only
- make a change to skin2 on the server -> no direct update for you
If you are on skin2:
- make a change to skin2 on the server -> direct update for skin2 only
- make a change to def JavaScript, which also resides in skin2 (and therefore the end result is the def+skin2 concatenation) -> update only skin2
- make a change to def that only touches a picture (the picture's extension is also checked against the application-descriptor) -> no direct update
That's how direct update works.
Please also share some more details about the problem. I see you use a js file; where do you change it? What do you mean exactly? Give a better (simplified) real-life example, because it is unclear what you are trying to do.

Running a metro app headlessly

I've hit a bit of a roadblock, and I'm hoping someone can help!
I've written a metro application that serves as a unit test runner, and I now need to be able to call this application headlessly so that it can be used for validation in the build process. The way the metro app works is it runs a bunch of unit tests, generates an XML file that contains the test results, and displays the results to the user.
Ideally, I would have a simple script that would run the metro app, execute the tests, exit the app, and then have the ability to read the results in the generated XML file. Is this possible, and if so, what's the best way to do it?
Here are some more specific questions:
How can one start a metro app headlessly, and is there a way for the metro app to detect this so that it does not wait for user input?
Is it possible to access files within the package of a metro app from an outside process?
EDIT - A workaround would be to create a custom Visual Studio test runner and then find a way to run the tests automatically with each build. I know this can be done within the IDE, but I'm not sure if there's a way to do this with a script.
I imagine you've long since moved past this problem, but for the sake of anyone else looking to do this, I got it to work without too much hassle. To execute a Metro app in an automated/headless fashion, I wrote a simple desktop command-line utility that takes the name of a metro app and makes use of the IApplicationActivationManager interface to launch it. I can then call that utility from a script.
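For reference, a minimal sketch of such a launcher in C# (only the first vtable method of the interface is declared, error handling is omitted, and the AUMID comes from the target app's package manifest):

using System;
using System.Runtime.InteropServices;

// IApplicationActivationManager, declared for COM interop.
[ComImport, Guid("2e941141-7f97-4756-ba1d-9decde894a3d"),
 InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IApplicationActivationManager
{
    void ActivateApplication(
        [In, MarshalAs(UnmanagedType.LPWStr)] string appUserModelId,
        [In, MarshalAs(UnmanagedType.LPWStr)] string arguments,
        [In] int options,            // 0 = AO_NONE
        [Out] out uint processId);
}

[ComImport, Guid("45BA127D-10A8-46EA-8AB7-56EA9078943C")] // CLSID ApplicationActivationManager
class ApplicationActivationManagerClass { }

class Program
{
    static void Main(string[] args)
    {
        var manager = (IApplicationActivationManager)new ApplicationActivationManagerClass();
        uint pid;
        // args[0] is the app's AUMID; everything after it is forwarded to the app.
        manager.ActivateApplication(args[0],
            string.Join(" ", args, 1, args.Length - 1), 0, out pid);
        Console.WriteLine("Launched, PID = " + pid);
    }
}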
The second argument to that interface's ActivateApplication method is a string that gets passed in to the activated app, kind of like command-line arguments. It shows up as the Arguments property of the LaunchActivatedEventArgs that is received by the app's OnLaunched handler. The default implementation of OnLaunched in the Visual Studio template projects passes this value to the MainPage when it first navigates to it, where it comes through into the OnNavigatedTo handler as the Parameter property of the NavigationEventArgs. You could catch it in whichever place is more convenient.
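On the app side, that value can be caught directly in OnLaunched; in this sketch, the "--headless" flag and RunTestsAndWriteResults are made-up names for your own convention and test entry point:

protected override void OnLaunched(LaunchActivatedEventArgs args)
{
    // args.Arguments carries the string passed to ActivateApplication.
    if (args.Arguments.Contains("--headless"))
    {
        RunTestsAndWriteResults();                            // hypothetical: run tests, write the XML to ApplicationData.Current.LocalFolder
        Windows.ApplicationModel.Core.CoreApplication.Exit(); // the app shuts itself down when done
    }
    // ... otherwise continue with the normal launch path
}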
My launcher utility passes a hard-coded flag through there, as well as forwarding its own command-line arguments. That allows the top-level script to pass arbitrary data down into the Metro app. The app can use that data to realize that it's running headless and run its tests. It can spit out whatever kind of result data you like into one of its folders (like its LocalFolder), which a desktop app can then read from %LOCALAPPDATA%\Packages\APPNAME\LocalState. I set up my launcher utility to wait for the result files to appear after launching the app, and then use them to determine its own exit code. The launcher utility can't kill the app afterward, but the app can kill itself when it's done via CoreApplication.Exit.
That setup worked great for a while, but a problem that I'm running into now is that the app isn't always launched to the foreground, and the runtime will suspend/terminate the app after it hasn't been the foreground app for some amount of time (currently ~10-15 seconds). So any tests that take too long won't work with this approach, barring some workaround that I haven't discovered yet (which I was searching for when I came across this question).
I doubt you'll be able to do it.
It's the same sort of problem as trying to run a WPF app headlessly, but harder since you'd also have to deal with the Metro sandbox security model.
P.S. Happy to be proven wrong!
No, sorry. You hit a wall with your first requirement: a script that runs the Metro application in "headless" mode in the first place. Your second requirement would be your next wall: one application cannot see, let alone monitor, another application/thread/process. Then your third requirement is also impossible: files inside an application are isolated. It sounds to me like you've found a good candidate for a desktop app. Having said that, don't mistakenly think that you can't have a companion Metro application that serves as your dashboard; it's just that the execution core can't be hosted inside the WinRT sandbox.