I have one URLLoader that logs the user in; once the login succeeds, a second URLLoader queries back the relevant information.
This works perfectly fine on my Mac OS X machine with MAMP running (when I'm running it as an AIR project), and on CentOS. But when I test it on Windows (WAMP), the session from the first URLLoader doesn't seem to be remembered. Any ideas?
What do you mean by "share session"? URLLoader only helps you make requests to the server side.
The URLLoader class downloads data from a URL as text, binary data, or URL-encoded variables. It is useful for downloading text files, XML, or other information to be used in a dynamic, data-driven application.
If you have a session problem in your AIR application, it means you have a problem either with the inner state of your app or on the server side. From the information you provided, you should look for problems in your server-side session handling caused by the move to the WAMP setup; for example, the web server not storing the session (a common WAMP culprit is a session.save_path in php.ini that points to a missing directory).
Related
I've built a Microsoft Teams channel tab with SSO and I'm hosting the tab application which I've built with React via create-react-app.
The auth works well, and the app loads and runs.
But when I update my app on the web site, the Teams desktop client (Mac and PC) will sometimes cache the old app and will not pick up the changes. But then sometimes it will.
If I run the web client, it usually picks up the changes.
I've verified that I'm serving up new bundles with different names each time I update. But running the Teams desktop devtools I can see that Teams is asking for the old bundle, every time, so it's definitely caching the response from my app's URL.
I've read about the problems the Teams desktop client has with caching SharePoint content and not picking up content changes. I've tried the cache-clearing techniques, but they don't seem to work for this issue. And I can't reasonably have users do crazy cache clearing every time I make an update to the tab app.
What should I do? Some have suggested I need to update my version in the app manifest and redeploy to Teams -- that seems really brutal. Do I need to set some cache headers in a certain way to force the Teams client to pick up the new code?
Solution
Set a Cache-Control response header to no-cache (or must-revalidate) for your build/index.html.
Explanation
We had the exact same issue. Turns out it was because we cached our build/index.html.
According to the create-react-app doc, only the content of build/static/ can safely be cached, meaning build/index.html shouldn't be cached.
Why? Because files in build/static/ have a uniquely hashed name and are therefore cache busted on deployment. index.html is not.
What's happening is since Teams uses your old index.html, it tries to load the old /static/js/main.[hash].js defined in it, instead of your new JS bundle.
It works properly in the Teams web client because most browsers send a Cache-Control: max-age=0 request header when requesting your index.html, ignoring any cache set for the file. Teams desktop doesn't as of today.
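As a concrete illustration, here is a minimal sketch of how those headers could be set if the build were served by an Express server; the server setup, port, and paths are assumptions, so adapt them to whatever actually serves your build:

import express from "express";
import path from "path";

const app = express();
const buildDir = path.join(__dirname, "build"); // assumed location of the create-react-app build output

// Hashed assets under build/static/ are cache-busted by filename, so they can be cached aggressively.
app.use("/static", express.static(path.join(buildDir, "static"), { maxAge: "1y", immutable: true }));

// index.html must always be revalidated; otherwise clients keep loading the old bundle names it references.
app.get("*", (_req, res) => {
  res.setHeader("Cache-Control", "no-cache"); // or "must-revalidate"
  res.sendFile(path.join(buildDir, "index.html"));
});

app.listen(3000);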
This seems like an issue with the way your app is managing the default browser caching logic. Are service workers enabled for your app? What cache control headers is your web server returning?
There are some great articles that describe all the cache controls available to you; for example:
https://medium.com/@codebyamir/a-web-developers-guide-to-browser-caching-cc41f3b73e7c
Have you tried doing something like this to prevent caching of your page (do note that, long term, you might want to use something like ETags, which is a more performant option):
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#preventing_caching
P.S. You can also follow the instructions here to open the dev tools in the Desktop Client to debug all this:
(How) can I open the dev tools in the Microsoft Teams desktop client?
And even force clear any cached data/resources for your app.
A bit unsure where to look for this one...
Context:
an HTML5 web page that uses HTML5 EventSource / server-sent events to get refresh notifications
OpenWrt BarrierBreaker server, running uHTTPd as the web server
a two-level CGI script that provides the server-sent events:
the CGI is a shell script (ash, not bash) that parses QUERY_STRING, and calls...
a C application that does the actual data extraction (from an SQLite database) and pushes the data to the web page
Everything works, except for a little detail: when the web page is closed, the C application keeps running. Since it doesn't expect any user input, its current structure is a simple while(1). So after some time, the OpenWrt box has dozens of copies of the app running.
So the question: how can the application be changed to detect that the client isn't there anymore, and that it should quit?
Thanks
[Edit]
Since posting this a few hours ago, I investigated whether the information was somehow available in the script's input stream. It appears it isn't.
I also found http://html5doctor.com/server-sent-events/, which describes a strategy to do exactly this in a Node.js environment, but I have no idea how to translate it to a script-based one.
[/Edit]
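For reference, the strategy the linked article describes boils down to reacting when the HTTP connection closes. A minimal TypeScript/Node sketch of that pattern (the port and payload are made up):

import http from "http";

// Minimal server-sent events endpoint that stops its push loop
// as soon as the client goes away.
http.createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });

  const timer = setInterval(() => {
    res.write(`data: ${JSON.stringify({ ts: Date.now() })}\n\n`);
  }, 1000);

  // Fired when the browser tab is closed or the connection drops.
  req.on("close", () => clearInterval(timer));
}).listen(8000);

In the CGI setup, the closest equivalent signal (assuming uHTTPd closes the pipe to the CGI when the browser disconnects) is that writes from the C application start failing: checking the return value of write() to stdout, or installing a SIGPIPE handler, instead of writing blindly in the while(1) loop would give the application a point at which to exit.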
I'm writing a suite of applications that all require login to a server. It's come together quite nicely, but I've run into a logistical snag. The nature of the applications requires that they be closed and launched again later with some frequency. It is very annoying to have to log in every time one of the applications launches.
I'm trying to think of a secure way of perhaps having the login information stored on the local user's machine. Is there a good way to even go about that? Permissions protected config files? The registry? How does Firefox store its passwords? Have you ever had to do something like this?
The suite is more of a protocol than anything, all the applications are written in a variety of languages (Python, C#, Java, etc) and run on a variety of operating systems (Windows, Linux, OSX, etc). I'm not really looking for code examples, but more just general approaches to this problem. Is it wise to have locally stored passwords? How can you have a session login for a suite with such disparate components? Right now I use application.rc config files stored locally to each application, but they are plain text and far from secure.
I'm going with Jeff on this one and assuming that since you mention the registry, you're referring to Windows. I'm also going to assume that you're talking about a desktop application (otherwise you could just use the built-in browser cookies to store the user's session).
Off the top of my head, I'd engineer the application so that when the user logs in to the server, the server returns a unique session id that identifies the authenticated user. I would then store that id along with a salted/encrypted timestamp (which gives you the option of expiring the cached credentials).
The storage mechanism is up to you. You could store them in the HKEY_CURRENT_USER hive of the Windows registry, or in the Application Data folder in Windows. Both give you the option of per-user storage.
Typically, this sort of thing is done by use of a "cookie": a key which (securely) indicates that the user has successfully logged in to the server resource before. This is how most web sites manage login information, and Firefox (all browsers, really) stores the cookies that sites set on user login. A few important things about cookies: they should be encrypted, so that malicious programs cannot generate one and thereby bypass the login process; they should match server-kept resources (for the same reason); and they should age out, so that while you can stay logged in to a site for a while, your login information is not permanent (permanent logins would be another security hole).
Personally, I would use an encrypted local config file with some sort of machine ID value (motherboard ID, chip ID, HD ID, etc.) as part of the encryption key, so that the config file can't just be copied from one machine to another. I would also include the date and time so you can expire it when you decide it has gone stale.
Alternatively, you can create a host exe or launcher that does the login and then goes to sleep, and wake it up each time you want to launch a new application. The host exe would take the application as a parameter and decide whether or not to ask for login credentials (usually only when the first app is started), then keep the login user and an encrypted password in memory. When the host exe has exited, the login info is forgotten, and when you start up again the cycle starts over.
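Combining the ideas above (a cached session id, an expiry timestamp, and a machine-bound encryption key), a minimal TypeScript sketch might look like the following; the file location, salt, and key derivation are illustrative assumptions, not a vetted design:

import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "crypto";
import { readFileSync, writeFileSync } from "fs";
import * as os from "os";
import * as path from "path";

const CACHE_FILE = path.join(os.homedir(), ".myapp-session"); // hypothetical location
// Deriving the key from a machine-specific value makes the file useless if copied to another machine.
const KEY = scryptSync(os.hostname(), "myapp-salt", 32);

export function saveSession(sessionId: string, ttlMs: number): void {
  const payload = JSON.stringify({ sessionId, expiresAt: Date.now() + ttlMs });
  const iv = randomBytes(16);
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const encrypted = Buffer.concat([cipher.update(payload, "utf8"), cipher.final()]);
  writeFileSync(CACHE_FILE, Buffer.concat([iv, cipher.getAuthTag(), encrypted]));
}

export function loadSession(): string | null {
  try {
    const raw = readFileSync(CACHE_FILE);
    const decipher = createDecipheriv("aes-256-gcm", KEY, raw.subarray(0, 16));
    decipher.setAuthTag(raw.subarray(16, 32));
    const json = Buffer.concat([decipher.update(raw.subarray(32)), decipher.final()]).toString("utf8");
    const { sessionId, expiresAt } = JSON.parse(json);
    return expiresAt > Date.now() ? sessionId : null; // an expired cache forces a fresh login
  } catch {
    return null; // missing, corrupted, or copied from another machine
  }
}

Note that os.hostname() is a weak stand-in for a real machine identifier; it is only there to illustrate the binding idea.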
Tomcat 6 supports persistence/replication of sessions, so you should take care to choose the right session manager and configure it ;-)
More info: http://tomcat.apache.org/tomcat-6.0-doc/config/manager.html
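For example, a PersistentManager that swaps idle sessions out to disk can be enabled in the application's context.xml. This is a minimal sketch based on the manager documentation linked above; the directory and timing are illustrative:

<Context>
  <!-- Persist idle sessions to disk so they survive restarts -->
  <Manager className="org.apache.catalina.session.PersistentManager"
           maxIdleBackup="30">
    <Store className="org.apache.catalina.session.FileStore"
           directory="/var/lib/tomcat6/sessions"/>
  </Manager>
</Context>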
We have an Adobe AIR application which could possibly be downloaded from multiple domains. And when it's run, it should connect back to the site it was downloaded from to get the data to show to the user.
So far we have a separate application build for each domain, with the site URL hardcoded into it. And I wonder: is there a way for an AIR application to find out at runtime the URL (or at least the domain) it was downloaded from?
What we would like to have is a single downloadable binary, served from all the different domains, which can still know its origin URL.
There's no function to retrieve such information; it would just make no sense if you think about it.
The most stable way is to include an external configuration file in the package.
Note that you can use Ant to automate this step for the final deployment.
There's no direct way to do it.
Here are some options which come to mind:
Build different versions for each site (this could be automated)
Let user choose the site at first launch
Try to guess it using whatever resources you have (timezone, language, etc.)
How should this work? The only solution I see (independent of AIR) is to deliver an extra (properties) file with the application, containing the URL it was downloaded from. That way you don't need to build a separate app for each domain, but only package a different domain file with it. The app then reads this file and executes some context-sensitive stuff.
I am trying to address the exact same issue right now.
It looks like you can modify the install badge to pass parameters to the AIR app.
From what I gather the values are only passed down on install or launch-from-badge.
Something I plan on researching is that one of the parameters in "AIRBadge.as" is _appURL, which is the URL of the page the badge is on. I don't yet know if that value makes it down to the installed AIR app in some way, but it could be a useful property. I'm ultimately hoping that the AIR install process injects it into the application descriptor XML, but I'm not holding my breath.
Check this page out: http://archive.davidtucker.net/2008/01/10/air-tip-5-passing-arguments-to-an-application-on-install/#
When the user downloads, you could store their IP address in your central DB. Then when the app is installed and runs the first time, the app could hit your central DB to match up their IP address with the server they downloaded from.
A cookie with a specific name being stored on a download page, and the AIR app looking for that? Though that might not work for direct downloads. It might also be hard to pull off since knowing the specific browser used to download it would be an issue.
Typically I develop my websites on trunk, then merge changes to a testing branch where they are put on a 'beta' website, and then finally they are merged onto a live branch and put onto the live website.
With a Facebook application, things are a bit tricky. As you can't view a Facebook application through a normal web browser (it has to go through the Facebook servers), you can't easily give each developer their own version of the website to work with and test.
I have not come across anything about the best way to develop and test a Facebook application while continuing to have a stable live website that users can use. My question is this: what is the best practice for organising the development and testing of a Facebook application?
Try updating your hosts file (for Windows users, at c:\windows\System32\Drivers\etc\hosts) with an entry that will route all requests from your live domain back to your machine.
So 127.0.0.1 mywebappthatusesfacebook.com.
Then make sure that your app is running at the root of your web server, at http://localhost/. Then go to mywebappthatusesfacebook.com in your browser and it should redirect right back to your local machine. Facebook won't know the difference. Hope this helps.
The way my partner and I did it was that we each made our own private Facebook application that pointed to the IP address we worked from. Since we worked in the same place, we each picked a different port and had our router forward that port to our local IP address. It was kinda slow to refresh a page, but it worked very nicely.
You'll have to add both trunk and test versions as different applications and test them using test accounts. You may also use a single application and switch its target URL between cycles.
Testing FB apps is still a rather primitive process.
I generally set up a test application that is a complete copy of the production settings inside the FB development environment, and that uses an SSH tunnel to point to my development server. You can set up as many applications as you need inside FB; I generally have a development application, a staging app, and production. Staging and production are both on "live" servers rather than an SSH tunnel.
In your application you then use whatever language/framework/server tools are at your disposal to switch the FB configuration based on the server. In Rails, the Facebooker gem actually has built in support for different FB configurations.
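Framework aside, the switching usually amounts to something like this minimal TypeScript sketch (the environment names and app ids are placeholders):

// Map each deployment environment to its own Facebook application id.
const FB_APP_IDS: Record<string, string> = {
  development: "111111111111111", // hypothetical dev app id
  staging:     "222222222222222",
  production:  "333333333333333",
};

const env = process.env.NODE_ENV ?? "development";
export const FB_APP_ID = FB_APP_IDS[env] ?? FB_APP_IDS["development"];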
Once all of that is done, testing is, unfortunately, still a matter of running the app within FB itself. I use Selenium to automate as much of this as possible.
Best way to do this:
Remove 'App Domain' from 'Basic Info'
Set the website's 'Site URL' to "http://localhost/".
That simple.
(This only applies if you don't have a live system running in parallel to the test environment. In that case, get yourself another key.)
We have it set up much like Toby: a series of config files for each developer that hold the Facebook app id info (a different app for each developer), separate pages where the app is hosted, and git ignores the config files. We're LAMP with CodeIgniter, and it's similar to Rails in that we can set the environment in one file, which points to the config with the Facebook constants.
We're branching out into Selenium, using unit tests for model testing.
For local testing we simply use a different app than for the server. In our case the Canvas-URL is set to localhost.local:8000.
You only have to make sure that when you use Facebook Connect you type localhost.local into the address field of the browser, and not just localhost.
For testing a canvas or tab app it is faster if you use the 'open iframe in new tab' command of Firefox. This way the session and cookies from Facebook are preserved.
Another solution is ngrok (https://ngrok.com/), which opens a public tunnel to your local app. For example, on my Rails application, simply typing
./ngrok 3000
gives me
http://630066fe.ngrok.com -> 127.0.0.1:3000