Titanium - save remote image to filesystem

I'm building an app with Titanium and I would like to save the user's profile picture on the phone. In my login function, after the API response, I tried to do:
Ti.App.Properties.setString("user_picture_name", res.profil_picture);
var image_to_save = Ti.UI.createImageView({image:img_url}).toImage();
var picture = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, res.profil_picture); //As name, the same as the one in DB
picture.write(image_to_save);
And in the view in which I want to display the image :
var f = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory,Ti.App.Properties.getString("user_picture_name") );
var image = Ti.UI.createImageView({
image:f.read(),
width:200,
height:100,
top:20
});
main_container.add(image);
But the image doesn't appear. Could someone help me?
Thanks a lot :)

There are two issues with your code:
1 - You cannot use the toImage() method unless the image view is actually rendered on the UI stack, i.e. on display. Use the toBlob() method instead.
2 - Point 1 will still not work the way you are using it, because you cannot call toBlob() until the image from the URL has completely loaded, i.e. until it is shown in the image view. To know when that has happened, listen for the image view's load event.
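A minimal sketch of point 2, reusing img_url and res from the question (win stands for whatever window is currently displayed; it is not part of the original code):
var imageView = Ti.UI.createImageView({ image: img_url });
win.add(imageView); // the view has to be on screen for the remote image to load
imageView.addEventListener('load', function() {
    // The remote image has finished loading, so toBlob() now returns real data.
    var blob = imageView.toBlob();
    var picture = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, res.profil_picture);
    picture.write(blob);
    Ti.App.Properties.setString("user_picture_name", res.profil_picture);
});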
But there is a better approach for this kind of task.
Since you already have the image URL from your login API response, you can fetch the image with an HTTP client call like this:
function fetchImage() {
    var xhr = Ti.Network.createHTTPClient({
        onerror : function() {
            alert('Error fetching profile image');
        },
        onload : function() {
            // this.responseData holds the binary data fetched from the url
            var image_to_save = this.responseData;
            // As name, use the same one as in the DB
            var picture = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, res.profil_picture);
            picture.write(image_to_save);
            Ti.App.Properties.setString("user_picture_name", res.profil_picture);
            image_to_save = null;
        }
    });
    xhr.open("GET", img_url);
    xhr.send();
}
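After this, the display code from the question should find valid image data in the file. As a variant (a small sketch reusing the stored property name and main_container), the image view can also be pointed directly at the saved file's path:
var f = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory,
        Ti.App.Properties.getString("user_picture_name"));
var image = Ti.UI.createImageView({
    image : f.nativePath, // nativePath points at the saved file on disk
    width : 200,
    height : 100,
    top : 20
});
main_container.add(image);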

You don't need to manually cache remote images, because
Remote images are cached automatically on the iOS platform and, since Release 3.1.0, on the Android platform.
[see docs here & credit to Fokke Zandbergen]
Just use the remote image URL in your UI. On first access Titanium will download and cache it for you; subsequent accesses to the same image URL will use the automatically cached copy on the local device (no code is best code).
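For example, a minimal sketch (img_url and main_container come from the question):
var avatar = Ti.UI.createImageView({
    image : img_url, // remote URL; Titanium downloads and caches it on first access
    width : 200,
    height : 100,
    top : 20
});
main_container.add(avatar);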
Hth.

Related

VueJS and Image CDNs: How to check when image is done being optimized through the Image CDN?

I might be overthinking this, but I have an avatar profile component that lets a user update their avatar image. I am using Google Cloud Storage as the image host and an image CDN (ImageKit) to handle the image optimization/caching. All works well... however, I do have a minor annoyance that I was wondering if someone could help me with:
First, here's my code (will help explain what I am having issue with):
async avatarChangeHandler(e) {
  this.overlayShow = true // <-- show a loading indicator
  try {
    if (e.target.files.length) {
      this.image = await new Promise((resolve) => {
        const reader = new FileReader()
        reader.onload = (e) => {
          resolve(e.target.result)
        }
        reader.readAsDataURL(e.target.files[0])
      })
      await this.photoUpload() // <-- upload image to Google storage and return new image URL
      await this.updateAvatar() // <-- update user's profile in the database with new URL
      await this.$store.dispatch('getUserProfile', this.currentUser) // <-- grab updated profile
      console.log('updating...done')
      this.overlayShow = false // <-- terminate the loading indicator
    }
  } catch (error) {
    // (error handling omitted in the question)
  }
}
The problem is that the user profile is updated with the new image URL and fetched faster than the image CDN finishes optimizing the new image... so overlayShow is already set to false while the old image still shows for a second or so (depending on the level of optimization needed).
What can I do to make sure overlayShow is not set to false until the image CDN is done with the new image? I realize this may not be feasible, but I'm looking for advice or suggestions on the approach. Thanks!
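One possible approach (just a sketch; newImageUrl is a placeholder for whatever URL updateAvatar() stores, it is not part of the code above) is to preload the CDN URL in the browser and only clear overlayShow once that preload has finished:
// Resolves once the browser has actually received the image from the CDN.
function waitForCdnImage(url) {
  return new Promise((resolve, reject) => {
    const img = new Image()
    img.onload = () => resolve(url)
    img.onerror = reject
    img.src = url
  })
}

// ...then, inside avatarChangeHandler, before hiding the loading indicator:
// await waitForCdnImage(newImageUrl)
// this.overlayShow = false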

Show local image file:///tmp/someimage.jpg

Scenario
I'm contributing to an OSS project that is built on Blazor Server Side and ElectronNET.API version 9.31.1.
In an Electron window we would like to show images from local storage in the UI via an <img> tag.
What I have tried:
I have tried with:
<img src="file:///home/dani/pictures/someimage.jpg" />
But it doesn't work; the image doesn't appear. I then tried to create the Electron window with WebSecurity = false, but that doesn't help either (images appear as broken in the UI):
var browserWindowOptions = new BrowserWindowOptions
{
    WebPreferences = new WebPreferences
    {
        WebSecurity = false,
    },
};
Task.Run(async () => await Electron.WindowManager.CreateWindowAsync(
    browserWindowOptions,
    $"http://localhost:{BridgeSettings.WebPort}/Language/SetCultureByConfig"
));
Finally, as a workaround, I'm sending the images as base64 data in the img src attribute, but it looks like a dirty approach.
My Question:
My question is, how can I show on electron window picture files from local storage.
Some irrelevant info:
The open source line where I need assistance.
There are several ways to go about this, so I will try to cover the most relevant use cases. Some of this depends on the context of your project.
Access to local files behaves as a cross-origin request by default. You could try using the crossorigin=anonymous attribute on your image tag, but that won't work because your local file system will not respond with cross-origin headers.
Disabling the webSecurity option is a workaround, but it is not recommended for security reasons, and it will usually not work correctly anyway if your HTML is not also loaded from the local file system.
Disabling webSecurity will disable the same-origin policy and set allowRunningInsecureContent property to true. In other words, it allows the execution of insecure code from different domains.
https://www.electronjs.org/docs/tutorial/security#5-do-not-disable-websecurity
Here are some methods of working around this issue:
1 - Use the HTML5 File API to load local file resources and provide the ArrayBuffer to ImageData to write the image to a <canvas>.
function loadAsUrl(theFile) {
    var reader = new FileReader();
    var putCanvas = function(canvas_id) {
        return function(loadedEvent) {
            // Note: ImageData expects raw RGBA pixel data of length width * height * 4,
            // and width/height must be known; they are not defined in this snippet.
            var buffer = new Uint8ClampedArray(loadedEvent.target.result);
            document.getElementById(canvas_id)
                .getContext('2d')
                .putImageData(new ImageData(buffer, width, height), 0, 0);
        };
    };
    reader.onload = putCanvas("canvas_id");
    reader.readAsArrayBuffer(theFile);
}
1.b - It is also possible to load a file as a data URL. A data URL can be set as source (src) on img elements with JavaScript. Here is a JavaScript function named loadAsUrl() that shows how to load a file as a data URL using the HTML5 file API:
function loadAsUrl(theFile) {
    var reader = new FileReader();
    reader.onload = function(loadedEvent) {
        var image = document.getElementById("theImage");
        image.setAttribute("src", loadedEvent.target.result);
    };
    reader.readAsDataURL(theFile);
}
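Either variant can then be wired to a file picker, for example (a minimal sketch; the element ID is a placeholder):
document.getElementById('filePicker').addEventListener('change', function (event) {
    if (event.target.files.length) {
        loadAsUrl(event.target.files[0]); // hand the picked file to one of the loaders above
    }
});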
2 - Use the Node API fs to read the file, and convert it into a base64-encoded data URL to embed in the image tag.
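A minimal sketch of that idea, assuming Node integration is available where it runs (the path is just the example from the question):
const fs = require('fs');

// Read the local file and embed it in the page as a base64 data URL.
const buffer = fs.readFileSync('/home/dani/pictures/someimage.jpg');
const dataUrl = 'data:image/jpeg;base64,' + buffer.toString('base64');
document.getElementById('theImage').setAttribute('src', dataUrl);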
Hack - Alternatively you can try loading the image in a BrowserView or <webview>. The former overlays the content of your BrowserWindow while the latter is embedded into the content.
// In the main process.
const { BrowserView, BrowserWindow } = require('electron')
const win = new BrowserWindow({ width: 800, height: 600 })
const view = new BrowserView()
win.setBrowserView(view)
view.setBounds({ x: 0, y: 0, width: 300, height: 300 })
view.webContents.loadURL('file:///home/dani/pictures/someimage.jpg')

How to precompile processingjs sketch to speed load times?

The loading times of my processingjs webpage are getting pretty hairy. How can I precache the compilation of processing to javascript?
It would be acceptable for my application to compile on first entering the webpage (maybe keeping the result in the local store?) and then reuse the compilation on subsequent loads.
There are two ways to drive down the load time as experienced by the user. The first is using precompiled sketches, which is relatively easy: grab the github repo, or even just download the master branch using github's download button (https://github.com/processing-js/processing-js), and then look for the "./tools/processing-helper.html" file. This is a helper page that lets you run or compile sketches to the JavaScript source that Processing.js uses. You will still need to load Processing.js alongside the compiled code, since it ties into the API provided, but you can use the "API only" version for that. Take the code it generates, prepend "var mySketch = ", and then do this on your page:
<script src="processing.api.js"></script>
<script>
  function whenImGoodAndReady() {
    var mySketch = (function.....) // generated by the Processing.js helper
    var myCanvas = document.getElementById('mycanvas');
    new Processing(myCanvas, mySketch);
  }
</script>
Just make sure to call the load function when, as the name implies, you're ready to do so =)
The other is to do late-loading, if you have any sketches that are initially off-screen.
There's a "lazy loading" extension in the full download for Processing.js - you can include that on your page, and it will make sketches load only once they're in view. That way you don't bog down the entire page load.
Alternatively, you can write a background loader that does the same thing as the lazy loading extension: turn off Processing.init, and instead gather all the script/canvas elements that represent Processing sketches, then load them on a timeout using something like:
var sketchList = [];

function findSketches() {
  /* find all script/canvas elements */
  for(every of these elements) {
    sketchList.push({
      canvas: <a canvas element>,
      sourceCode: <the sketch code>
    });
  }
  // kickstart slow loading
  slowLoad();
}

function slowLoad() {
  if(sketchList.length > 0) {
    var sketchData = sketchList.shift(); // take the first queued sketch
    try {
      new Processing(sketchData.canvas, sketchData.sourceCode);
      setTimeout(slowLoad, 15000); // load the next sketch in 15 seconds
    } catch (e) { console.log(e); }
  }
}
This will keep slow-loading your sketches until it's run out.
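To kick the whole thing off, the discovery function can simply be run once the page itself has loaded, for example:
window.addEventListener('load', findSketches);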

How to create a functioning small thumbnail with small play button with Spotify Apps API?

Somewhat of a JavaScript novice here.
I'm trying to create this: http://i.imgur.com/LXFzy.png from the Spotify UI Guidelines.
Basically a 64x64 album cover with an appropriately sized play button.
This is what I have so far:
function DataSource(playlist) {
    this.count = function() {
        return playlist.length;
    }
    // make node with cover, trackname, artistname
    this.makeNode = function(track_num) {
        var t = playlist.data.getTrack(track_num);
        // console.log(t);
        var li = new dom.Element('li');
        // generate cover image with play/pause button
        var track = m.Track.fromURI(t.uri, function(a) {
            var trackPlayer = new v.Player();
            trackPlayer.track;
            trackPlayer.context = a;
            dom.inject(trackPlayer.node, li, 'top')
        });
        // track name
        var trackName = new dom.Element('p', {
            className: 'track',
            text: t.name
        });
        // artist name
        var artistName = new dom.Element('p', {
            className: 'artist',
            text: t.artists[0].name
        });
        dom.adopt(li, trackName, artistName);
        return li;
    }
}
This DataSource function feeds into a pager function later in the code. It generates the image, artist name, and track name just fine, except I can't seem to get the image to be 64x64 without overriding it with my own CSS. I'm sure there is a way to set this in JavaScript, since the core Spotify CSS files include a class for it, but I'm at a loss as to how to do it.
Also, the play button renders, but it gives an error in the console that the track has no method 'get' when I click on it. How am I supposed to know it needs a get? Is there some way I can see this player function so I know what I'm doing wrong with it?
Any help would be greatly appreciated; I'm sure it'll help droves of people too, as there is no documentation anywhere I can find on how to do this.
Check the code here: https://github.com/ptrwtts/kitchensink/blob/master/js/player.js
The kitchensink app displays a lot of the Spotify Apps API functionality.
For the playback button, I know that it doesn't seem to actually work for single tracks used as the context. It really only works if you use either an Artist, Album, or Playlist context. Not sure why that is.

How do I get data from a background page to the content script in google chrome extensions

I've been trying to send data from my background page to a content script in my Chrome extension, but I can't seem to get it to work. I've read a few posts online, but they're not really clear and seem quite high level. I managed to get OAuth working using the OAuth contacts example from the Chrome samples. The authentication works, I can get the data, and I can display it in an HTML page by opening a new tab.
I want to send this data to a content script.
I'm having a lot of trouble with this and would really appreciate it if someone could outline the explicit steps you need to follow to send data from a background page to a content script, or even better, some code. Any takers?
The code for my background page is below (I've excluded the OAuth parameters and other details):
function onContacts(text, xhr) {
    contacts = [];
    var data = JSON.parse(text);
    var realdata = data.contacts;
    for (var i = 0, person; person = realdata.person[i]; i++) {
        var contact = {
            'name' : person['name'],
            'emails' : person['email']
        };
        // this "contacts" array is read by the contacts.html page when opened in a new tab
        contacts.push(contact);
    }
    // sending the data to a new tab
    chrome.tabs.create({ 'url' : 'contacts.html'});
    // chrome.tabs.executeScript(null, {file: "contentscript.js"}); // maybe this may work?
};
function getContacts() {
    oauth.authorize(function() {
        console.log("on authorize");
        setIcon();
        var url = "http://mydataurl/";
        oauth.sendSignedRequest(url, onContacts);
    });
};
chrome.browserAction.onClicked.addListener(getContacts);
As I'm not quite sure how to get the data into the content script, I won't bother posting the multiple versions of my failed content scripts. If I could just get a sample of how to request the "contacts" array from my content script, and how to send the data from the background page, that would be great!
You have two options for getting the data into the content script:
Using Tab API:
http://code.google.com/chrome/extensions/tabs.html#method-executeScript
Using Messaging:
http://code.google.com/chrome/extensions/messaging.html
Using Tab API
I usually use this approach when my extension will just be used once in a while, for example, setting an image as my desktop wallpaper. People don't set a wallpaper every second or every minute; they usually do it once a week or even once a day. So I just inject a content script into that page. It is pretty easy to do so; you can either do it by file or by code, as explained in the documentation:
chrome.tabs.executeScript(tab.id, {file: 'inject_this.js'}, function() {
console.log('Successfully injected script into the page');
});
Using Messaging
If you constantly need information from your websites, it would be better to use messaging. There are two types of messaging: long-lived and single-request. Your content script (which you define in the manifest) can listen for extension requests:
chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
  if (request.method == 'ping')
    sendResponse({ data: 'pong' });
  else
    sendResponse({});
});
And your background page could send a message to that content script through messaging. As shown below, it will get the currently selected tab and send a request to that page.
chrome.tabs.getSelected(null, function(tab) {
  chrome.tabs.sendRequest(tab.id, {method: 'ping'}, function(response) {
    console.log(response.data);
  });
});
Which method to use depends on your extension. I have used both. For an extension that is used constantly, I use messaging (long-lived). For an extension that is not used all the time, you don't need a content script in every single page; you can just use the Tab API's executeScript, because it will inject a content script only when you need it.
Hope that helps! Do a search on Stackoverflow, there are many answers to content scripts and background pages.
To follow up on Mohamed's point:
If you want to pass data from the background script to the content script at initialisation, you can generate another simple script that contains only JSON and execute it beforehand.
Is that what you are looking for?
Otherwise, you will need to use the message passing interface
In the background page:
// Subscribe to the onVisited event, so that injectSite() is called once at every page load.
chrome.history.onVisited.addListener(injectSite);

function injectSite(data) {
    // get the custom configuration for this URL in the background page.
    var site_conf = getSiteConfiguration(data.url);
    if (site_conf)
    {
        chrome.tabs.executeScript({ code: 'PARAMS = ' + JSON.stringify(site_conf) + ';' });
        chrome.tabs.executeScript({ file: 'site_injection.js' });
    }
}
In the content script page (site_injection.js)
// read config directly from background
console.log(PARAMS.whatever);
I thought I'd update this answer for current and future readers.
According to the Chrome API, chrome.extension.onRequest is "[d]eprecated since Chrome 33. Please use runtime.onMessage."
See this tutorial from the Chrome API for code examples on the messaging API.
Also, there are similar (newer) SO posts, such as this one, which are more relevant for the time being.
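For reference, a minimal sketch of the same ping/pong exchange using the newer runtime messaging API (the content script listens, the background page sends to the active tab):
// content script
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
  if (request.method == 'ping')
    sendResponse({ data: 'pong' });
});

// background page
chrome.tabs.query({ active: true, currentWindow: true }, function(tabs) {
  chrome.tabs.sendMessage(tabs[0].id, { method: 'ping' }, function(response) {
    console.log(response.data);
  });
});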