I've just created an Eclipse target definition/platform for my application, opting to use software sites (rather than local files/installations) as recommended in the tutorial I followed and a later best-practices post by the same author.
The software sites are all external (Eclipse, SourceForge, etc.).
Everything seems to be working well, though I have two concerns:
If a component is updated (by the software provider), will it also be updated automatically in the target definition file?
Is it possible to take a backup of the target platform, so that it can be configured (for example) on a computer without an internet connection, or used in the event a remote site becomes unavailable?
You can create a mirror of an Eclipse p2 repository. It's quite common to do this inside an organisation so that there's a copy of the repository that's quick to access and isn't dependent on some third party continuing to host it. There's a guide on the Eclipse Wiki.
As far as I'm aware, your Target Definition can only reflect what's in the p2 repository it's pointing at. If the developer replaces a package with a newer version, the target will pick that up. If you need greater control over that, then selectively mirroring the content is probably the way to go.
From that wiki page, it looks like by default it won't delete content in your mirror (even if it's deleted in the remote) unless you specify -writeMode clean.
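For reference, the mirroring itself is done with the p2 mirror applications that ship with Eclipse. A minimal sketch, where the source URL and destination path are placeholders you would substitute with your own:

# Mirror the installable-unit metadata of the remote repository
eclipse -nosplash -application org.eclipse.equinox.p2.metadata.repository.mirrorApplication -source https://download.eclipse.org/releases/latest -destination file:/path/to/mirror

# Mirror the actual artifacts; -writeMode clean also removes content deleted from the remote
eclipse -nosplash -application org.eclipse.equinox.p2.artifact.repository.mirrorApplication -source https://download.eclipse.org/releases/latest -destination file:/path/to/mirror -writeMode clean

Once mirrored, you can point your target definition at the file: URL instead of the external site.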
Why is it a good idea to limit deployment of files to the user-profile or HKCU from my MSI or setup file?
Deployment is a crucial part of most development. Please give this content a chance. It is my firm belief that software quality can be dramatically improved by small changes in application design to make deployment more logical and more reliable - that is what this "answer" is all about - software development.
This is a Q/A-style question split from an answer that became too long: How do I avoid common design flaws in my WiX / MSI deployment solution?
As stated above, this section was split from an existing answer with broader scope: How do I avoid common design flaws in my WiX / MSI deployment solution? (an answer intended to help developers make better deployment decisions).
9. Overuse of per-user file and registry deployment.
Some applications won't run correctly for all users on a machine, because the user-specific data added during installation isn't correctly added to other users' profiles and registry hives. In other words, the application only works for the user who installed the software. This is obviously a serious design error.
There are several ways to "fix" this, but the whole issue of deployment of per-user files and settings is somewhat messy for a few fundamental reasons:
How do you reference-count components installed multiple times (once for each user on the machine)?
What do you do with the installed data and settings on uninstall?
How do you deal with new files and settings to install when they differ from the ones on disk and in the registry that carry user-made changes? Surely you don't overwrite automatically?
There are no real clear-cut answers, but there are several alternative ways to deal with the "problems". My preferred options are 2 and 3 below, since I don't think Windows Installer should deploy, track, attempt to modify or, worse yet, uninstall user data and settings at all - it is user data that shouldn't be meddled with:
9.1 Using Windows Installer Self-Repair or similar
The first option is to get settings, files and HKCU registry keys deployed properly via the setup itself or setup-like features. There are two major ways to do this: relying on Windows Installer "self-repair", generally triggered by an advertised shortcut, or using Microsoft Active Setup.
Self-repair is what happens when you launch a shortcut to start your application, and Windows Installer kicks in and you see a progress bar whilst "something" is being installed. What is typically added are HKCU registry entries and user-profile files.
There is another alternative to achieve this, called Active Setup, which is also a Microsoft feature. It essentially registers "something runnable" to run once per user at logon, and this can be used to set up per-user data. Active Setup allows "anything runnable" to be executed - for example, a copy of files to the user profile.
Both of these options mean that the user data and settings are copied in place once - from then on they are generally not touched, although in the case of self-repair they might get uninstalled for any user who actually runs the uninstall of the application (unless the setup is designed not to do so).
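To make Active Setup concrete, here is a minimal sketch of the registration, expressed as plain reg.exe commands (the GUID, display name and stub command are hypothetical placeholders; a real setup would typically write these keys from the MSI instead):

rem Register a per-user stub; Windows runs StubPath once per user at logon
reg add "HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components\{12345678-1234-1234-1234-123456789012}" /ve /d "MyApp per-user setup" /f
reg add "HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components\{12345678-1234-1234-1234-123456789012}" /v StubPath /t REG_EXPAND_SZ /d "\"%ProgramFiles%\MyApp\MyApp.exe\" /setupuser" /f
reg add "HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components\{12345678-1234-1234-1234-123456789012}" /v Version /t REG_SZ /d "1,0,0,0" /f

At each interactive logon, Windows compares this HKLM key's Version with the per-user copy under HKCU; if the HKCU copy is missing or older, the StubPath command is run once for that user and the HKCU entry is written.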
Although self-repair and Active Setup are "established" methods for getting applications running properly, it seems wrong to track user data with Windows Installer components. Why? Because it is really user data that shouldn't be meddled with once initialized.
Accordingly, my honest take on the whole issue is to avoid deploying user-specific data or registry keys and values altogether - the two user-data deployment methods described next do exactly that.
9.2 Application Initialization of User Data
The second alternative, and one that I find much cleaner, is to change your application executable so that it can initialize all per-user settings and files itself, based on default settings and templates copied from a per-machine location, or based on application-internal defaults (from the source code), instead of writing them via your setup.
In this scenario, Windows Installer does not track the files or settings that are copied for each user; they are treated as user data that should not be interfered with at all. This avoids all interference, such as user data being reset or overwritten during upgrades and self-repair (and manual uninstall and reinstall).
If there are cases where "fixes" must be made to application settings, this can be achieved by having the application executable update the settings for each user on launch, and then tag the registry to record that the update has been completed.
The overall "conclusion" is that your setup should prepare your application for first launch, it should not set up the user data and settings environment. All user-profile files and HKCU settings should be defaulted by the application in case they are missing on launch - this yields a much more robust application that is easier to test for QA personnel as well. This is particularly important for Terminal Servers where self-repair is not allowed to run at all. In such cases the application data will be missing if you rely on self-repair to put user data in place.
9.3 "Cloud" or Database Storage of User Settings
To take things a step further in today's "cloud environment" - and this is, in my opinion, the preferred option - why should your application be restricted to files and registry keys and values? Why not store all user-specific settings in the solution's database?
Full access, control and persistence for all settings without any deployment issues at all.
You do get new management issues, though, and they must be shared between developers, system administrators and database administrators. But isn't the cloud pretty much the industry standard by now?
We have been struggling long enough with roaming profiles, corrupted user registry hives, mishandled user-profile data files, etc. Developers, save yourselves a lot of trouble and create some new database management issues instead of deployment issues - and start yelling at a whole new bunch of people! :-)
Settings in databases are:
Free of "dual-source problems": there is one instance, and it is updated in real time - unlike the synchronization problems seen with user profiles and "roaming".
Inspectable, manageable and patchable
Revisable (version control - you can revert to older settings)
You could still "tweak" all the user settings from your setup by running database scripts as part of deployment, but if you are in a corporate environment, isn't the thought of just raising a ticket and having your database administrator run the maintenance scripts, with proper transaction support and rollback, much more appealing?
Even if you are delivering a large, fat-client vendor application for general distribution and third-party use (in other words, not a tailored corporate client/server solution where you are guaranteed to have a back-end database), you should consider cloud storage of user settings: have users log on to a cloud using their email or similar, and then synchronize settings in real time.
Such large applications generally still need to "cache" some settings in files on the computer and in HKCU, but it seems more and more possible to save all settings in a single file in the user-profile area that is entirely "sacrificial" - it can even be deleted if it is corrupted, after which the last saved settings are simply downloaded again.
Instead of hosting the cloud yourself, it is obviously possible to let company DBOs configure their own company-wide cloud where they have full control of all settings, and can also enforce mandatory policies and restrictions for your software's operation. Not to mention the proper backups that then become possible for all user settings.
I have created an application that exposes an OS X service for certain files by adding an NSServices entry to my application's Info.plist (as in http://www.macosxautomation.com/services/learn/), but I find that upon installing my application on a new machine, the service doesn't show up quickly in the Finder right-click context menu.
I know that this is because pasteboard services hasn't re-indexed the /Applications folder and "discovered" the newly installed service.
I also know that I can force a re-index and discovery by manually running /System/Library/CoreServices/pbs.
The question here is: what is the best way to ensure that my service shows up as quickly as possible for users who are installing my application for the first time?
I could execute a system call to /System/Library/CoreServices/pbs when my application starts up (if the user immediately starts my application), but that only partly solves the problem (in addition, I wonder if there is a better, Cocoa-API-based way of doing this).
If my application is generally only accessed via the context menu, a user will never think to go out and start the application in the first place - they will only think it is broken when the context menu item isn't there.
I am not distributing my application with an installer. I am simply providing a bundle that can be dragged and dropped into /Applications (as I believe Apple usually suggests).
Is there a way to expedite the process of service discovery when doing an installation in this fashion, so that there isn't any period of time where the user is without the newly installed service?
As a side note, it appears that the problem may not exist in 10.8 (or may at least be less pronounced). Apple may have made this indexing happen more quickly in their most recent release.
I've actually ended up using
system("killall pbs;/System/Library/CoreServices/pbs -flush");
in one of my apps, just as you describe, though that was a long time ago, when 10.5 was still in the picture as well.
You might want to try this function, however:
void NSUpdateDynamicServices(void)
which, according to the documentation, acts just like flushing pbs but is a cleaner solution.
Also, if (according to your description) the app is nothing but a service, consider making it really just a service - see (Installing the Service):
To build a standalone service, use the extension .service and store it in Library/Services.
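A sketch of what installing such a standalone service for the current user could look like from the command line (the bundle name is hypothetical; the flush command is the same one quoted above):

cp -R MyFileService.service ~/Library/Services/
/System/Library/CoreServices/pbs -flush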
The Apache ACE documentation refers to a RepositoryTool.jar that can be used to manage the repository, but I could not find this tool in the Apache ACE distribution. Where can I download it?
The page you're referring to is part of the old site (the new one is located at http://ace.apache.org) and refers to tooling you probably shouldn't be using anymore: it was used before there were other ways to interact with the repository, mainly for development purposes.
Depending on your needs, you can use the repository in a number of ways:
If you need to programmatically read and write the repositories (remember that they're only XML), use the HTTP API available for that.
You can do the same thing from code, see Repository and its implementations.
If you want to edit 'meaningful' ACE data (such as linking distributions and targets), use the Client REST API - this is probably the option you want (see the sketch after this list).
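To make the REST option concrete, here is a rough sketch with curl (host, port and workspace id are placeholders, and the endpoints are my recollection of the Client REST API - verify them against the documentation on the new site):

# Create a temporary workspace; the response's Location header points at /client/work/{id}
curl -v -X POST http://localhost:8080/client/work

# List the features known in that workspace (assuming it was assigned id rest-1)
curl http://localhost:8080/client/work/rest-1/feature

# Commit the workspace so your edits land in the repository
curl -X POST http://localhost:8080/client/work/rest-1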
I've recently been exporting my settings from my various IDE's to share with friends. I plan to put them in a shared public location, so I was wondering if IntelliJ exports any sensitive information (saved password hashes, etc) that I should be aware of. If it matters, I'm using the GitHub add-on and the Scala plugin, and the most recent IntelliJ Community Edition.
The reason I ask is (as a new IntelliJ user and longtime Studio user) I know that Studio warns you of possibly sensitive information when you export settings, and excludes it by default. I saw no such warning in IntelliJ and I'm wondering if that's because there's nothing to be worried about, or...
Thanks in advance!
Good question.
I wanted to check it as well, so I exported my settings and had a look around. I didn't find any passwords or credential information there, but there were some things that could be considered "personal":
File header templates - mine, for example, contained my email address.
Last opened project location path.
Recently opened projects - for "Reopen" dialogue.
RECENT_DIR_STRINGS setting - not sure what these are used for, but it contained around 25 paths of code directories in my file system.
So to sum up - unless you were working on some secret projects on the side, it seems safe to share the settings with colleagues and friends. I would be careful about releasing them publicly, though.
I think this feature is designed to be used with their configuration server - a free service (if you have the Ultimate edition) that allows a developer to migrate their settings from one machine to another.
So it probably isn't designed for public sharing.
I'll be using RTC in the near future here at work. My question is: where does it put the files the team members will be working on? I understand that each programmer works on the project's files and pushes the changes to the main repository. We have a local web server where we test our work (PHP). So, do we have to configure RTC to publish the files to the web server, or must the RTC server be installed on the web server so it can save the files there?
We use Rational Team Concert almost exactly as you describe, and it works brilliantly. My small team of web developers collaborates on website source code and delivers it to two different streams depending on its readiness: production-stream and staging-stream. Then we have defined two builds that check out the source code, move some things around, and push the files to the web servers via SCP. So, with a few clicks we kick off a staging build, watch it finish in about two minutes and everyone can see the changes on the staging server. When the code is ready for prime-time, the change sets are delivered to production-stream and the production build is kicked off, which is configured to copy the files to the production web server.
But even before a staging or production build is run, any of us can simply configure a local web server in RTC using the Eclipse PDE and Web Tools add-ons and see the site running in localhost as we develop.
All our work is done within Rational Team Concert, from planning, to bug tracking, to source control, to builds. It's very well-suited for website management.
Your understanding is correct - you work on files locally, and they get uploaded to the server when you check in. Bear in mind that check-in in RTC terms really means backing up your files to the server; it is the Deliver command that shares the files with others (it is worth a quick look at the articles on jazz.net that explain how SCM works).
One way to publish to your PHP server is to make that part of a build, or a build in its own right (which RTC also handles, in conjunction with your favourite build tool). The build would copy the files to the PHP server. The advantage of doing this as a build is that you will know exactly which versions of your files are being copied, and you will be able to reproduce the copy at any point in the future.
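Purely as an illustration (the repository URI, credentials, workspace name and paths are hypothetical, and option spellings should be checked against scm help load for your RTC version), such a build step could boil down to:

# Load the build workspace into a local sandbox, then copy the site to the web server
scm load -r https://rtc.example.com/ccm -u builduser -P secret -d /tmp/site "Web Build Workspace"
rsync -av --delete /tmp/site/ deploy@webserver:/var/www/html/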
You do not need to install the RTC server on the PHP server.
You can also try posting on the forums at http://jazz.net/ if you have questions about RTC.
Hope that helps.
Another alternative would be to use the command-line interface to accept all changes into a workspace, and run that with a cron job, as sketched at the end of this answer.
To handle discarded change sets, you'd probably want to use something like:
scm workspace replace-components <workspace-name> stream <uuid-of-stream> --all
after you had initially loaded the workspace on your web server.
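For the cron part, a hedged sketch (paths, repository URI and credentials are placeholders; check scm help accept for the exact options in your RTC version):

# Accept incoming changes into the loaded sandbox every 15 minutes
*/15 * * * * /opt/rtc/scmtools/eclipse/scm accept -r https://rtc.example.com/ccm -u builduser -P secret -d /var/www/html >> /var/log/rtc-accept.log 2>&1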