I'm going to start using Trac for the first time. From what I've gathered, the latest 0.12 is capable of supporting multiple projects easily (which is something I'll need, since I have about five projects). However, it seems 0.12 is still in development (0.12-dev). So, my question is: is it good enough for a Trac newbie like me to use? Does anyone have any experience with it? It will be installed on a Linux server.
BTW, I'll only be using the basic functions such as the SVN browser, wiki, tickets and so on.
0.12 is only going to support a subset of multiple-project functionality (see the milestone): you can now connect multiple source repositories to a single Trac environment. You will still need to create your own logic for handling multiple projects inside that single environment, with ticket components or what have you.
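For reference, the multi-repository support is configured through trac.ini; a minimal sketch (the repository names and paths here are made up):

    [repositories]
    projA.dir = /srv/svn/projA
    projA.type = svn
    projB.dir = /srv/git/projB.git
    projB.type = git

Each repository then shows up under its own prefix in the source browser of the single environment.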
I'm running all my environments on 0.12 trunk (currently r9280); I follow the Trac development timeline and hand-pick the next revision to upgrade to when something important gets fixed. Some of my environments have multiple SVN and Git repositories connected. SVN is rock solid; GitPlugin occasionally causes some quirks (mainly revision-caching issues), but for me those are minor compared to the convenience I get.
I would definitely recommend moving straight to 0.12-dev; I've already written a bit about some of its other benefits over 0.11.
I am developing on multiple machines and have the repository and/or project-folder on a private cloud.
I would like to have a file or something that lists every tool in use (NP++ v1.x.x, VS2019 v4.x.x, yEd v2, etc.).
I find the idea of NPM's "package.json" extremely useful. Maybe there is something similar at the OS level. (Win10, by the way.)
Possible solutions I've thought of:
Of course, just track it manually
A virtual machine (which I don't want to use and cannot host anyway)
The tool/practice/extension/whatever should only track some given IDEs/tools, not set up an OS from scratch.
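One idea along the package.json line, assuming the tools are available as Chocolatey packages, would be to keep a packages.config manifest in the repo (the entries and versions below are just examples):

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <!-- pin versions where they matter; omit them where "latest" is fine -->
      <package id="notepadplusplus" version="7.8.1" />
      <package id="yed" />
    </packages>

A single choco install packages.config -y would then set up a fresh machine, and the manifest lives in version control just like package.json does.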
I want to know what the difference is between installing Hortonworks HDP and installing the components directly from the Apache projects. One thing I can think of is that Hortonworks probably aligns the packages so that the version of each component is compatible with the others in the suite, while if I get them directly from the Apache projects I may have to handle version compatibility myself. Is that correct? Are there any other differences, ignoring the support-subscription aspect?
Thanks.
There are a lot of differences between "roll your own" and using a distribution. Some of the most obvious include:
All of the various components and versions have been tested and built to work together - incompatibility between versions (e.g. Hive, Hadoop, Spark, etc.) can be a painful problem to sort through on your own
Most distribution providers, including Hortonworks, backport patches from unstable releases into stable ones, so even for the "same" version (e.g. Hive 1.2.1) you're getting a better build than vanilla - these can include both bug fixes and "safe" feature changes
Most distribution providers, including Hortonworks, provide some flavor of centralized platform management. I'm a big fan of Ambari (the one that comes with HDP), for example - it makes configuration and monitoring significantly easier than coordinating a vanilla install
I would strongly recommend against trying to deploy vanilla, unless it's just for learning and playing. The HDP community edition is free (in both senses) and a major improvement over doing it yourself. My last deployment of HDP was based entirely on the community edition.
I want to play around with and check out Apache's CouchDB as a possible back end for a web app that I am designing. I therefore want an instance of CouchDB, but also to be able to throw it away when the testing is done. The development computer is an Ubuntu laptop (not server). The problems are:
The Ubuntu repository has CouchDB 1.0, but the CouchDB website strongly recommends installing 1.1, built from source.
I have Erlang built and installed from source, because the Erlang distribution in the repository is defective. I don't see the point of installing another Erlang alongside it.
CouchDB has a lot of dependencies, including a whole bunch of Perl libs that I really don't need and would prefer to throw away when I'm done.
So I am looking for a way to either:
Install CouchDB 1.1 as a package that can be easily uninstalled, or
Build CouchDB from source with as few installed dependencies as possible, so that when I'm done I can just delete it. Preferably, do this without building another Erlang distribution, instead configuring the build to use the existing one (a rough sketch of what I mean follows below).
Is either of these possible, and how? Thanks in advance.
BTW, I am aware of the build-couchdb project, but from what I've read it requires installing all the build dependencies in advance, which is undesirable because it will leave a whole bunch of dangling packages on my system without their being dependencies of a CouchDB package. It also fetches its own copy of Erlang, which is redundant for me.
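For option 2, I imagine something along these lines, if I'm reading the configure options right (the prefix and the Erlang include path are guesses for my machine):

    ./configure --prefix=$HOME/opt/couchdb-1.1 \
                --with-erlang=/usr/local/lib/erlang/usr/include
    make && make install
    # throwing it away later would then just be:
    rm -rf $HOME/opt/couchdb-1.1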
(Dear moderators: this question combines issues that relate not only to programming, but also to server administration, Unix software, and, particularly, Ubuntu Linux. Therefore, it might be suitable for a few other Stack Exchange sites. I reckon it is most likely to be answered here, since this kind of hackery is often done by programmers. However, if I am wrong, feel free to migrate it, and I apologize in advance for your trouble.)
You could install CouchDB into a chroot jail.
A chroot is a way of isolating applications from the rest of your computer by putting them in a jail. This is particularly useful if you are testing an application which could potentially alter important system files.
(From the Ubuntu instructions on creating a chroot jail.)
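In rough outline (the release name and target directory are only examples), that approach looks like:

    sudo apt-get install debootstrap
    sudo debootstrap lucid /srv/chroot/couchdb http://archive.ubuntu.com/ubuntu/
    sudo chroot /srv/chroot/couchdb
    # build and run CouchDB inside the jail; when finished, delete it wholesale:
    sudo rm -rf /srv/chroot/couchdb

Everything CouchDB pulls in, Perl libs included, stays inside the jail directory, so cleanup is a single delete.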
Another option, assuming your laptop has the appropriate hardware virtualization support, is to use KVM.
The KVM option might be more helpful in the long run as you could move the VM's disk image onto a server.
We have a largish standalone (i.e. not Java EE) commercial Java project (10,000+ classes, four or five SVN repositories, ten or twenty third-party libraries) that's in the process of switching over to Maven. Unfortunately only one engineer (in a team of a dozen or so distributed across three countries) has any prior Maven experience, so we're kind of figuring it out as we go.
In the old Ant way of doing things, we'd:
check out source code from three or four repositories
compile it all into a single monolithic JAR
release that (as part of a ZIP file with library JARs, an installer, various config files, etc.)
check the JAR into SVN so we had a record of what the customers had actually got.
Now, we've got a Maven repository full of artifacts, and a build process that depends on Maven having access to that repository. So if we need to replicate what we actually shipped to a customer, we need to do a build against a Maven repository that has all the proper versions of everything. This is doable, I guess, if in (some version of) the (SVN-controlled) POM files we set all the dependencies to released versions?
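By "released versions" I mean pinning every dependency to a fixed, non-SNAPSHOT version, something like this in the POM (the coordinates here are invented):

    <dependency>
      <groupId>com.example</groupId>
      <artifactId>foo-api</artifactId>
      <version>1.2.3</version>  <!-- a fixed release, never 1.2.3-SNAPSHOT -->
    </dependency>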
But it gives our release engineer the creepy-crawlies, because there doesn't seem to be any way:
to make sure that somebody doesn't clobber the copy of foo-api-1.2.3.jar on the WebDAV server by mistake (the WebDAV server has access control, but that wouldn't stop a buggy build script)
to detect it if they did
to recover afterwards
His idea is, for release builds, to use a local file system as the repository rather than the WebDAV server, and put that local repository under SVN control.
Our one Maven-experienced engineer doesn't like that -- I guess because he doesn't like putting binaries under version control? -- and suggests that maybe the professional version of the Nexus server can solve the clobbering or clobber-tracking/recovery problem.
Personally, I'm not happy (sorry, Sonatype readers) with shelling out money for a non-free build system when we haven't even seen any benefit from the free version yet, and there's no guarantee it will actually solve the problem.
So our choices seem to be:
WebDAV server
Pros: only one server, also accessible by devs, ...?
Cons: easy clobbering, no clobber-tracking/recovery
Local file system
Pros: can be placed under revision control
Cons: only works with the distribution script
Frankly, both of these seem like hacks to me, and I have to wonder if there isn't a better way to do this.
So: Is there a right thing to do here?
I'm not sure I understand everything, but I would:
Use the maven-release-plugin, which automates the release process, i.e. executes all the steps documented in release:prepare (the basic flow is sketched just after this list).
Use WebDAV with an anonymous read-only and authenticated-write policy (so only the release engineer can actually deploy released artifacts to the corporate repository).
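A minimal sketch of that flow, assuming a standard maven-release-plugin setup:

    mvn release:prepare   # verifies the build, tags the release in SVN, bumps the POMs to the next version
    mvn release:perform   # checks out the tag and deploys the released artifacts to the remote repository

release:prepare is also what rewrites the project's SNAPSHOT versions to fixed release versions in the tagged POMs, so a build from the tag is reproducible.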
There is no need to put generated artifacts under version control (if you have the POMs under version control). I don't see the benefit of using the local file system instead of WebDAV (it doesn't provide more security; you can secure WebDAV as well). And I don't see what the commercial version of Nexus would solve here.
Nexus has a setting which prevents you from clobbering an already-released artifact in a release repository.
For a team of about a dozen, the free version of Nexus should be enough.
I'm wondering how software development teams distribute their standard IDE(s).
E.g. developing with Eclipse: a custom code formatter, SVN repository, copyright header, etc.
At the moment my team has a standard zip file which is distributed among the developers.
Problem:
If one file, a plugin, or the IDE itself changes (e.g. new coding guidelines, or an upgrade to Eclipse 3.5.1), the whole distribution has to be done again and every developer needs to unzip the bundle again. Imagine working with different workspaces (Jetty, different Tomcat versions, WTP) due to project history. That doesn't scale.
I know that there are some related articles:
A new version of Eclipse just came out. Is there anything I can do to avoid having to manually hunt down my plugins again?
Manage Your Eclipse Install With A Local Git Repository
And some commercial programs.
Eclipse also has a new update/installer approach.
But I don't see the killer app. How does your team solve this? Is there a best practice?
I guess the best would be a program that lets you choose your current project, then downloads the configured IDE from the server and lets you know if the project config files have been updated.
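The closest scriptable thing I have found is Eclipse's headless p2 director application, which can install features from an update site on the command line; a rough sketch (the repository URL is the Galileo release site, and the feature IU is just an example):

    eclipse -nosplash \
      -application org.eclipse.equinox.p2.director \
      -repository http://download.eclipse.org/releases/galileo \
      -installIU org.eclipse.jdt.feature.group

A wrapper script per project could drive this to converge each developer's install on the team's plugin list.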
For Eclipse, look at Buckminster; it targets exactly your use case, I suppose. I haven't used it personally, though.
At my previous company they wrote a custom update agent that pulled from a centrally configured server which was updated by the team leaders. It worked well, until people wanted to install their own plugins.
Basically, a developer wanted a plugin, fought in futility to get it included in the default (managed) repo, and installed it himself; then updates broke on his machine when the team lead had a sudden stroke of common sense and included it.
They never did come up with a 'good' way to manage it. But, at least they didn't put us all on terminal servers with thin clients.