Backward change transport from QAS to DEV? - abap

I developed many programs in the DEV environment and transported them to QAS.
Then a problem occurred and the system administrator restored an old version of DEV without taking a backup of the recent state, so I lost many of the programs I had developed in DEV.
I am wondering if there is a way to transport them from QAS to DEV?

If you want to avoid using the transport system to restore those objects, the third-party program ZABAPGIT (abapGit) might help you. It allows you to export all development objects from a package or a transport request into a zip file and then import that zip file into a different system.
This program does not care about the usual transport routes. When you import objects into a system with it, they are treated as if they had been created in that system.

SE10 - create a Transport of Copies.
Manually add the programs:
R3TR PROG xxxxxx
or for classes:
R3TR CLAS xxxxxx
Release the transport with a virtual/dummy target (not PRD).
In STMS in DEV,
add the transports to the import buffer.
Import them. You will most likely need the "Overwrite Originals" option (a tp command-line equivalent is sketched below).
Be sure you pick the right objects ;)
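If you prefer the command line to the STMS GUI, tp on the DEV host can do the same thing. This is only a sketch - the transport number, system ID, client and profile path are placeholders, and the exact unconditional modes needed for overwriting originals depend on your release, so check the tp documentation:
# run as <sid>adm on the DEV application server
tp addtobuffer QASK900123 DEV pf=/usr/sap/trans/bin/TP_DOMAIN_DEV.PFL
tp import QASK900123 DEV client=100 pf=/usr/sap/trans/bin/TP_DOMAIN_DEV.PFL
# in STMS you would instead tick "Overwrite Originals" in the import options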
OPTION 2: if it is only source code and you have Eclipse as your IDE:
Connect both QAS and DEV in Eclipse.
Open the object in DEV.
Use the compare-with-QAS option: right-click in the source code and choose "Compare".
Copy the deltas back to DEV.

Related

Are there any tools or practices for tracking developer IDE & tool versions?

I am developing on multiple machines and have the repository and/or project folder on a private cloud.
I would like to have a file or something that lists every tool used (NP++ v1.x.x, VS2019 v4.x.x, yEd v2, etc.).
I find the idea of the "package.json" for NPM extremely useful. Maybe there is something similar at the OS level. (Win10, by the way.)
Possible solutions I thought of:
of course, just track it manually
a virtual machine (which I don't want to use and cannot host anyway)
The tool/practice/extension/whatever should only track some given IDEs/tools, not set up an OS from scratch.

How can I migrate an Aptana workspace between two installations

I am in the process of migrating between two operating systems on the same computer. This has got to the stage of being a multiboot system; both OS are Windows Vista. The aim ultimately is to replace the old (and creaking) system with another Vista OS. One is intended as a working system and the other as an experimental system where I can indulge in some of the hairier techniques I pick up on the web and elsewhere without unduly compromising the working system.
I now have data from the old system replicated in a separate disk partition so that it can be accessed from either OS. Aptana Studio 3 is set up on both systems and I have successfully imported projects from one to the other. However, one component missing from the process is the local history files, so I cannot get back to previous versions of project files from the new installation. I have discovered that these files are located in %WORKSPACE%\.metadata\.plugins\org.eclipse.core.resources\.history, but have not found a way of accessing them from either installation. Copying them across from either the data partition mentioned above or from the old OS doesn't appear to work, and anyway would defeat the object of being able to access project files, including their histories, from either system.
What I am looking for is a way of migrating the Aptana workspace in the old system, complete not only with projects (which I have done and which works) but also with their local histories so that both Aptana installations can access them on completely equal terms.
I hope that makes sense. Please bear in mind that I am an enthusiastic amateur rather than an IT professional; that may influence the range and depth of any advice that may be offered.

How to upload new/changed files from development server to the production one?

Recently I started to incorporate good practices into my development workflow, so I separated the development server from the production one. I also incorporated a versioning system using Subversion (TortoiseSVN).
Now I have the problem of synchronizing the production server (Apache shared hosting) with the files of the latest development version on my local machine.
Previously I didn't have this problem because I worked directly on the server files through FileZilla. But now I don't know how to transfer the files in an efficient way, or what the good practices are in this respect.
I read something about Ant and Phing but I'm not sure whether they are appropriate for me or just unnecessary complexity.
Rsync is a cross-platform tool designed to help in situations like this; I've used it for similar purposes on multiple occasions. This DevShed tutorial may be of some help.
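For example, a one-liner along these lines pushes a local working copy to the server over SSH (host, user, and paths are placeholders, the exclude pattern assumes an SVN working copy, and it assumes your shared host allows SSH access):
rsync -avz --delete --exclude='.svn' ./myproject/ user@example.com:/var/www/myproject/
If the host only offers FTP, rsync won't work and you would need one of the SVN-export approaches described below instead.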
I don't think you want to "automate" it so much as establish control over your deployment and integration process. I generally like SVN, but it has some bugs, and one problem I have with it is that it doesn't support baselining - instead you need to make a physical branch of your repository if you want a stable version to promote to higher environments while continuing to advance the trunk.
In any case, you should look at continuous integration and Jenkins. This is a rather wide topic to which no single answer can be given. There are many ins, outs, and what-have-yous. It depends on your application platform and components: do you have database changes, are you dealing with external web services or third-party APIs, etc.
Maybe there are more structured solutions out there, but with TortoiseSVN you can export only the files changed between two revisions, in a folder tree structure, and then upload them as usual with FileZilla (or script the same thing on the command line, as sketched after the steps below).
Take a look at:
http://verysimple.com/2007/09/06/using-tortoisesvn-to-export-only-newmodified-files/
Using TortoiseSVN, right-click on your working folder and select "Show Log" from the TortoiseSVN menu.
Click the revision that was last published.
Ctrl+Click the HEAD revision (or whatever revision you want to release) so that both the old and the new revisions are highlighted.
Right-click on either of the highlighted revisions and select "Compare revisions." This will open a dialog window that lists all new/modified files.
Select all files from this list (Ctrl+A), then right-click on the highlighted files and select "Export selection to…"
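If you would rather script it than click through the dialogs, the plain svn client can produce the same list of changed files. A rough sketch (the repository URL and revision numbers are placeholders):
svn diff --summarize -r 100:HEAD https://svn.example.com/repo/trunk
This prints one status letter (A/M/D) and a path per changed file, which you can then feed into whatever upload or copy script you already use.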
Side note:
You would have to give more details about your workflow and configuration - the applicable solutions depend on them. I see four main nodes in play: workplace, repository server, DEV, PROD; some nodes may be combined (1+2, 2+3) and may have different sets of tools (do you have SSH, rsync, NFS, Subversion clients on DEV/PROD?). All details matter.
In any case, Subversion repositories have such a thing as hooks; in your case a post-commit hook (executed on the repository server side after each commit) could be used.
In this hook (any code that can be executed in unattended mode) you can define and implement any rules for deploying to any target under any conditions. You only have to know:
which transport will be used for transferring files;
what your webspaces on the servers are (working copies or just clean unversioned files - both solutions have their pros and cons) - this determines which deployment policy ("export" or "update") you have to implement in the hook.
There are also scripts around that export the files affected by a revision (or a range of revisions) into an unversioned tree; a rough sketch of such a hook follows.
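Purely as an illustration - the paths, host, and the choice of rsync as the transport are assumptions, not a recommendation:
#!/bin/sh
# Subversion post-commit hook sketch: the server passes the repository path and revision.
REPOS="$1"
REV="$2"
# Export a clean, unversioned tree of trunk and push it to the web server.
svn export --force "file://$REPOS/trunk" "/tmp/deploy-$REV"
rsync -az --delete "/tmp/deploy-$REV/" user@prod.example.com:/var/www/app/
rm -rf "/tmp/deploy-$REV"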

Making an updates manager module for a program

I'm working on a program that shall have an "updates" module (online). I can't figure out how to do this. Initially I'm trying with an SVN repository. Any better idea? How is this normally done?
(I'm not asking about a concrete language, I only want a general idea of the process.)
Thank you.
What we do (in an intranet environment) is roughly:
We have an application that (instead of starting directly) points to a little script that fetches the latest "published" version from a known location using rsync.
Then the script simply bootstraps the application itself.
This way:
Everyone always works with the same version of the software.
New builds are easy to deploy: just copy them over to the known 'sync' location.
Using rsync or similar allows you to minimize overhead since it works incrementally.
We force the upgrade upon our users, but this mechanism could also be adapted for online (on-demand) updates.
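A minimal sketch of such a launcher, assuming an rsync daemon on the build server and made-up paths and names throughout:
#!/bin/sh
# Pull the latest published build, then hand control over to it.
rsync -az --delete rsync://build-server.example.com/releases/current/ "$HOME/.myapp/"
exec "$HOME/.myapp/bin/myapp" "$@"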

Can you freeze a C/C++ process and continue it on a different host?

I was wondering if it is possible to generate a "core" file, copy it to another machine and then continue execution from the core file on that machine?
I have seen the gcore utility that will make a core file from a running process. But I do not think gdb can continue execution based on a core file.
Is there any way to just dump the heap/stack and restore those at a later point?
It's called process migration.
MOSIX and openMosix used to be able to do that. Nowadays it's easiest to migrate a whole VM.
On modern systems, not from a core file, no you can't. For freezing and restoring an individual process on Linux, CryoPID and the newer kernel-based checkpoint/restart are in the works, but their abilities are currently quite limited. OpenVZ and other virtualization-like software can freeze and restore an entire system.
Also check out the Condor project. Condor can do that with parallel jobs as well. Condor also includes monitors that can automatically migrate your process when someone, for example, starts using their workstation again. It's really designed for utilizing spare cycles in networked environments.
This won't, in general, be sufficient to let an arbitrary process continue on another machine. In addition to the heap and stack state, there may also be open I/O handles, allocated hardware resources, etc.
Your options are either to explicitly write your software in a way that lets it dump state on a signal and later resume from the dumped state, or to run your software in a virtual machine and migrate that to the alternate host - Xen and VMware both support freeze/restore as well as live migration.
That said, CryoPID attempts to do precisely this and occasionally succeeds.
As of Feb. 2017, there's a fairly stable and mature tool called CRIU that depends on updates made to the Linux kernel in version 3.11 (as this was done in Sep. 2013, most modern distros should have them incorporated into their kernel versions).
It can be installed via apt by simply calling sudo apt-get install criu.
Instructions on how to use it.
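A minimal checkpoint/restore looks roughly like this (the PID and image directory are placeholders; a process started from a shell usually needs --shell-job, and restoring on a different machine assumes a compatible kernel and that the same files and paths are available there):
sudo criu dump -t 1234 -D /tmp/ckpt --shell-job
# copy /tmp/ckpt to the other machine, then:
sudo criu restore -D /tmp/ckpt --shell-job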
In some cases, this can be done. For example, part of the Emacs build process is to load up all the Lisp libraries and then dump the memory image on disk for quick loading. Some other language interpreters do that too (I'm thinking of Lisp and Scheme implementations, mostly). However, they're specially designed for that kind of use, so I don't know what special things they have to do to allow that to work.
I think this would be very hard to do for a random program, but if you wrote a framework where all objects supported serialisation/deserialisation, you can then serialise all objects used by your program, and then ship that elsewhere, and deserialise them at the other end.
The other people's answers about virtualisation are spot on, too.
Depends on the machine. It's very doable in a very small embedded system, for instance. I think it's also implemented somewhat in Beowulf clusters and other supercomputeresque apps.
There are lots of reasons you can't do what you want very easily. For example, when you restore the core file on the other machine, how do you resolve file descriptors that your process had open? What about sockets, named pipes, semaphores, or any other OS-level resource? Basically, unless your system is specifically designed to handle such an operation, you can't naively dump a core file and move it to another machine.
I don't believe this is possible. However, you might want to look into virtualization software - e.g. Xen - which makes it possible to freeze and move entire system images from one machine to another.