How do I publish PHP source code to a local web server in Rational Team Concert?

I'll be using RTC in the near future here at work. My question is: where does it put the files the team members will be working on? I understand that each programmer will work on the project's files and push the changes to the main repository. We have a local web server where we test our work (PHP). So, do we have to configure RTC to publish the files to the web server? Or must the RTC server be installed on the web server so it can save the files there?

We use Rational Team Concert almost exactly as you describe, and it works brilliantly. My small team of web developers collaborates on website source code and delivers it to two different streams depending on its readiness: production-stream and staging-stream. We have defined two builds that check out the source code, move some things around, and push the files to the web servers via SCP. So, with a few clicks, we kick off a staging build, watch it finish in about two minutes, and everyone can see the changes on the staging server. When the code is ready for prime time, the change sets are delivered to production-stream and the production build is kicked off, which is configured to copy the files to the production web server.
But even before a staging or production build is run, any of us can simply configure a local web server in RTC using the Eclipse PDE and Web Tools add-ons and see the site running on localhost as we develop.
All our work is done within Rational Team Concert, from planning, to bug tracking, to source control, to builds. It's very well-suited for website management.
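
To make the deploy step in those builds concrete, here is a minimal sketch of the kind of SCP script they run. The host name, user, and paths are made-up placeholders, and the real build loads the files from the RTC stream first:

#!/bin/sh
# deploy.sh -- push the loaded build output to the staging web server
SRC=./build/site                 # where the build loaded the stream's files
HOST=staging.example.com         # placeholder host
DEST=/var/www/html               # placeholder web root
scp -r "$SRC"/* "deploy@$HOST:$DEST/"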

Your understanding is correct: you work on files locally, and they get uploaded to the server when you check in. Bear in mind that check-in in RTC terms really means backing up your files to the server; it is the Deliver command that shares the files with others (it is worth a quick look at the articles on jazz.net that explain how SCM works).
One way to publish to your PHP server is to make that part of a build, or a build in its own right (which RTC also handles, in conjunction with your favourite build tool). The build would copy the files to the PHP server. The advantage of doing this as a build is that you will know exactly which versions of your files are being copied, and you will be able to reproduce the copy at any point in the future.
You do not need to install the RTC server on the PHP server.
You can also try posting on the forums at http://jazz.net/ if you have questions about RTC.
Hope that helps.

Another alternative would be to use the command line interface to accept all changes into a workspace and run that with a cron job.
To handle discarded change sets, you'd probably want to use something like:
scm workspace replace-components <workspace-name> stream <uuid-of-stream> --all
after you had initially loaded the workspace on your web server.
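If you go the cron route, a sketch of what the entry might look like (the install path, repository URI, and workspace name are assumptions; check scm help accept for the exact flags in your version):

# crontab entry: accept incoming change sets into the loaded workspace every 15 minutes
*/15 * * * * /opt/jazz/scmtools/eclipse/scm accept -r https://jazz.example.com/ccm -t web-workspace >> /var/log/rtc-accept.log 2>&1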

Related

Serve Oracle Service Cloud Customer Portal locally?

I am working on customizing the Oracle Service Cloud Customer Portal, but OSvC only provides WebDAV to connect to it, so it is very time-consuming to edit files and then upload them to WebDAV, even for a single-word change.
I am looking for a solution to serve it locally, make the desired changes, and then upload the changed code to WebDAV.
But after examining the file structure, I cannot tell which framework it uses; I tried websites like https://builtwith.com/ and WhatRuns, but they were not able to find anything useful either.
Although, after searching the file structure, I did find some CodeIgniter files, the structure is quite different from the usual CodeIgniter folder structure.
The short answer is no, you will not be able to run Customer Portal locally. While it is a fork of CodeIgniter from many years ago, there are server-side dependencies that will prevent you from running it in a local sandbox.
That said, it is possible to automate many of the manual tasks of interacting with WebDAV for change testing. If you edit locally, you can use scripting hooks or even RPA robots to automate some of the manual file movement. Personally, I have a flow where I edit remotely in my test environment with an editor (like VSCode or Nova) that can connect to a remote server via WebDAV and edit files directly in the development area of a site. Then, when finished, I have a script that pulls down the latest version of all files and allows me to commit the changes to Git for SCM.
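As a rough illustration of that pull-and-commit flow, assuming an rclone remote named osvc configured for the site's /dav/cp endpoint (the remote name, paths, and commit message are all hypothetical):

#!/bin/sh
# mirror the latest Customer Portal files from WebDAV, then record them in Git
rclone sync osvc:/ ./cp-src
cd ./cp-src || exit 1
git add -A
git commit -m "Sync from WebDAV on $(date +%Y-%m-%d)"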
Another option is RPA. You can develop a robot that can be run to automate the manual tasks that you face in your workflow. Personally, I think that scripting is a better solution than RPA since you can automate all of the actions via scripting or a shell. But, it's another option to consider.
Another way of "live editing" the OSvC CP code is to connect to WebDAV via software that supports it, like Mountain Duck, which uploads your code to OSvC on save.
Or use the better built-in solution, Windows Explorer, which supports connecting to WebDAV and treating it like a network drive: go to My Computer -> Computer -> Map Network Drive, enter https://yoursite.custhelp.com/dav/cp, click Next, and you'll be prompted to log in using your OSvC login.
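If you prefer the command line, the same mapping can be done with net use (the drive letter and login are placeholders), provided the Windows WebClient service is running:

net use Z: "https://yoursite.custhelp.com/dav/cp" /user:your-osvc-login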

Complex system and Vagrant

On production, we have a web infrastructure as follows:
Load Balancer (haproxy)
API Server (PHP + apache)
Frontend Server (Javascript + nginx)
MySql Server
Redis Server
I'd love to start using Vagrant to keep development environments exactly the same as production, plus make it easy for a new developer to jump in and start working.
The big question is: how should I build the box?
Should I put everything in one box or should I build more boxes? And how many?
It depends on the conventions you've agreed on with your developers. Ask yourself one question: what type of structure do you want to work in, distributed or centralized?
If the answer is "distributed", you can make one box per project. You won't get mixed up when you need to bring up a project that last saw changes a long time ago. But this method takes up a lot of memory and storage space, and it sometimes doesn't make sense if most of your projects are based on the same production environment.
If the answer is "centralized", one box for all projects built on the same environment is enough for you. It saves plenty of time, but it's also easy to get confused when you're looking for an old project. You can set up a Docker container per project inside your Vagrant box.
Additionally, I'd suggest using Packer for box building. It's definitely the right instrument for this goal: it can make a "ready to work" Vagrant box for every virtualization environment and execute shell scripts or configuration-management scripts. Just put everything essential for the production environment into the box; later, developers can add package dependencies through Vagrant provisioning and share them via Vagrantfile settings.
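As a sketch of that Packer-based workflow (the template file name and paths are assumptions, not a ready-made configuration):

# build a base box from a Packer template, then register it with Vagrant
packer build web-base.json
vagrant box add --name web-base builds/web-base.box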

How to organize mixed HTTP server + web client Dart project files?

I'm planning to create a pure Dart application where both the HTTP server and the web client side are written in Dart. Coming from Java and Eclipse, the ultimate setup would be to open the whole project hierarchy in Dart Editor and be able to run the server, which serves the client files, and debug both sides of the app (the server side with the Dart VM and the client side with Dartium).
I've fired up Dart Editor, and after creating a simple command-line application as the basis for the server side, I got confused by the project layout.
The server-side code files (the web server bootstrap class, handler and filter classes) definitely go into the project's bin/ folder. Server-side dependencies go into the project's pubspec.yaml file.
The problem arises when the server has to access the client application files (.dart files, static page source, etc.) in order to serve them to the browser. The easiest solution would be to create a web folder inside the server project and put the client web files there, but this way (as far as I understand) server-side dependencies are inherited by the client because we are still in the same pubspec scope. I don't want this.
I thought about creating a client library in the project's lib/ folder and putting the web files there, but I don't know how good a practice it is to put a complete web application in there. I guess I would have to put HTML and other client static files into the asset/ subfolder of lib. I'm afraid I'd lose the IDE's web application assistance this way.
What I might also be able to do is put the client into a separate project, organize it like a Dart webapp project with its very own pubspec.yaml, and then make this a dependency of the server application somehow. I don't know whether the server could access web files in the other project for serving. This is probably the best way of doing it, because it provides a clean separation of the client and server files.
Can somebody enlighten me what's the correct way of doing this?
Some more explanation.
Say I'm going with the separate-project approach, as others are already suggesting in the answers, but I'd still like to run a server during the development phase that can serve the client without any fancy hacks. The server has to access the client files in the other project. It doesn't matter whether it's JavaScript or Dart; the static files are there anyway. And during development I wish to serve the Dart files, since Dartium speeds up development significantly with its ability to run Dart directly.
With Java and Maven, I can make the client package a runtime dependency of the server and simply serve the client files from the classpath. Does Dart support accessing a pub dependency's internal files in a similar way, or is the only option to put everything into the client's asset folder or go with the relative-path hack?
This is work in progress:
prepare a Dart app for server-side deployment
To improve the development experience you may use a symlink as a workaround so that you have the client files available in a directory of the server package.
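For example, assuming sibling server/ and client/ packages (the names are hypothetical), the workaround amounts to something like:

# expose the client's build output inside the server package
ln -s ../client/build/web server/web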
I suggest creating a feature request at http://www.dartbug.com/new for better support.
I would go for two separate projects.
You won't need to make the client package a dependency on the server package.
The server only needs to know where the directory with the build output of the client package is.
Which files to serve is usually requested by the client.
The client requests e.g. index.html, and all further dependencies (.dart, .html, .js, .img, .css, ...) are hard-coded in this file, so the server should not need to know any further details beforehand.
I'd suggest organising two separate projects. There are a few things you might gain from this approach. The most obvious is that there's no coupling between client and server; you get a very clear separation. Another is that your server can evolve independently of the client. Dart applications need to be compiled to JavaScript, so in the end you will have a Dart server app serving JavaScript files (plus maybe Dart files, if you decide to serve them). Some of the packages that you use on the server side are not available in Dartium, and you don't want to have to deal with that dependency mess. Your server might also consist of more than one app; maybe it will have a module in Java or some other language. Keeping the two projects separate gives you a lot more flexibility.

Eclipse RCP Target Platform: Updates & Backing Up

I've just created an eclipse target definition/platform for my application, opting to use software sites (rather than local files/installations) as recommended in the tutorial I followed and a later best practices post by the same author.
The software sites are all external sites (eclipse, sourceforge etc.)
Everything seems to be working well, though I have two concerns:
If a component is updated (by the software provider), will it also be updated automatically in the target definition file?
Is it possible to take a backup of the target platform, so that it can be configured (for example) on a computer without an internet connection, or used in the event a remote site becomes unavailable.
You can create a mirror of an Eclipse p2 repository. It's quite common to do this inside an organisation so that there's a copy of the repository that's quick to access and isn't dependent on some third party continuing to host it. There's a guide on the Eclipse Wiki.
As far as I'm aware, your Target Definition can only reflect what's in the p2 repository it's pointing at. If the developer replaces a package with a newer version, it'll pick that up. If you need greater control over that, then selectively mirroring the content is probably the way to go.
From that wiki page, it looks like by default it won't delete content in your mirror (even if it's deleted in the remote) unless you specify -writeMode clean.
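The invocation from that guide looks roughly like this; the repository URL and destination are placeholders, and you typically run both the metadata and the artifact mirror applications against the same destination:

eclipse -nosplash -application org.eclipse.equinox.p2.metadata.repository.mirrorApplication -source http://download.eclipse.org/releases/kepler -destination file:/opt/mirrors/kepler
eclipse -nosplash -application org.eclipse.equinox.p2.artifact.repository.mirrorApplication -source http://download.eclipse.org/releases/kepler -destination file:/opt/mirrors/kepler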

How would I create a flexible EC2 Windows 2008 boot script?

If you look at the Linux ecosystem (especially the Ubuntu and Alestic EC2 images) there is a common technique where the VMs are pre-configured to look at the EC2 user-data and use it as a boot script. The nice thing about this approach is that you can write a boot script that further provisions your machine, allowing you to avoid making a new image every time your software that runs on the machine changes.
I want to do the same thing for Windows, but given that I'm a Mac and Linux guy, I'm a bit lost on where to start. My requirements are:
This must run on Windows Server 2008
A bootstrap script needs to start when the machine boots up and read the user-data file by pulling down the contents of http://169.254.169.254/1.0/user-data
The bootstrap script then needs to run the contents of that file as if it were a script
The script embedded in the user-data needs to run in such a way that it has access to the desktop environment (i.e., it can launch a browser, etc.)
I'm not quite sure how services work in Windows or if I need to enable auto-login, so any advice here would be appreciated. The ultimate goal is to run a Java program that launches some custom software that in turn launches a web browser (IE, Firefox, etc) and is capable of taking screenshots.
The screenshot part is interesting, because in the past when I've tried this the only way I could get something other than a black screen was to have UltraVNC or RealVNC boot up as a service, though I don't know why that helped.
I'm looking for answers to three specific questions, as well as any general advice:
Should I be focusing on a Windows service or auto-login + a .bat file in the "Startup" folder?
If I use a Windows service, is there anything special that I need to do to make sure desktop access and/or screenshots are available?
Do you recommend any tools for common Linux commands, like curl or wget? Last time I used Windows I used Cygwin a lot, but is there something more appropriate to use here?
I have not tried auto-login on Windows instances in EC2, but here's the support document on how to enable it.
We bootstrap our Windows instances using a custom AMI with a custom Windows 'install' service already installed. The bootstrap installer reads a URL from user-data at startup. The URL points to a ZIP file stored in S3. The installer then downloads, unzips, and executes the actual application installer -- in our case a simple CMD file.
This setup allows us to have one base AMI and easily overlay 15+ different application configurations (without having to rebuild the AMI). If you only have one application configuration, this may be overkill for your situation.
The only trouble we ran into was our installer service starting too early -- changing the service startup mode to "Automatic (Delayed Start)" fixed that issue.
We wrote our bootstrap installer in Java, launched via YAJSW, because we're comfortable with it. If you just want a few simple Unix tools, most are available pre-compiled for Windows, for example wget.
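A rough PowerShell sketch of that read-URL-from-user-data, download, unzip, and run flow (all URLs, paths, and the ZIP layout are assumptions about a setup like ours, not working code for yours):

# fetch the installer URL that was passed in via EC2 user-data
$wc = New-Object System.Net.WebClient
$zipUrl = $wc.DownloadString("http://169.254.169.254/1.0/user-data").Trim()
New-Item -ItemType Directory -Force -Path C:\bootstrap | Out-Null
$wc.DownloadFile($zipUrl, "C:\bootstrap\installer.zip")
# Expand-Archive needs PowerShell 5+; on older hosts shell out to an unzip tool instead
Expand-Archive -Path "C:\bootstrap\installer.zip" -DestinationPath "C:\bootstrap\app" -Force
& "C:\bootstrap\app\install.cmd"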
For something completely different, you could try PsExec to configure the instance after it has booted.
You can try using RightScale's free developer account to create plain PowerShell scripts and associate them with your Windows instances to run at boot time. The RightScale dashboard solves exactly the problems you are trying to solve above.
DISCLAIMER: I work for RightScale.
As for screen capture, CutyCapt is a simple tool you can point at a URL to generate an image.
UnxUtils is a great solution for those looking for Unix tools on Windows. It's got the wget.exe that you're looking for; however, using PowerShell to download things is not so bad either:
$wc = new-object system.net.webclient
$wc.DownloadFile("http://stackoverflow.com","test.html")
If you can write a batch file to do your setup, then you can run it at startup of the VM by doing this:
1. Run REGEDT32.EXE.
2. Modify the following value within HKEY_CURRENT_USER:
Software\Microsoft\Windows NT\CurrentVersion\Winlogon\ParseAutoexec
1 = autoexec.bat is parsed
0 = autoexec.bat is not parsed
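The same change can be scripted, e.g. while preparing the image, with reg.exe:

reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v ParseAutoexec /t REG_SZ /d 1 /f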
As an answer to #3, I would say that you can do just about anything you need in a batch file, including downloading from an FTP server (but not from an HTTP server). I am really interested in this stuff, so if you have questions, try asking me.
If you use Elastic Beanstalk, you can use this:
Customizing the Software on EC2 Instances Running Windows
It uses YAML formatting standards, e.g.
packages:
  msi:
    mysql: http://dev.mysql.com/get/Downloads/Connector-Net/mysql-connector-net-6.6.5.msi/from/http://cdn.mysql.com/
or
sources:
  "c:/myproject/myapp": http://s3.amazonaws.com/mybucket/myobject.zip
I know this is a little late to help with the original post, but for anyone still reading, one solution is to use the http://cloudinitnet.codeplex.com/ project. The service is easily installed using a PowerShell script and will create a local administrator account to use while running.
The goal of this project is to replace the Cloud-Init project used in Amazon Linux and Ubuntu.