In HP ALM Table SMART_REPOSITORY_LOGICALFILE there is a column SRLF_PARENT_PATH
I need to know where the path segments come from, since I'm experiencing massive issues after an upgrade...
Sample:
.\hist\REQ\0000\0042\0123\4567\3
I only know where the last two segments are coming from:
"4567" is the Requirement Id
"3" is the version number of the requirement
But where does 0000\0042\0123 come from?
There is no physical path like that on the entire server...
Prior to ALM 11.0 the path of the repository content was exposed and the logical path matched the physical path. Since 11.0 they use the smart repository; those segments are the nested directory levels of the logical path (not the physical OS one). Note that you can get direct access to the logical path via FTP, which can be activated in Site Administration via the FTP_PORT parameter.
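If you want to sanity-check what the upgrade produced, you can also look at the logical paths straight from the project schema. Below is a minimal sketch over JDBC, assuming a SQL Server back end; the URL, credentials and driver are placeholders you would swap for your own environment.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LogicalPathDump {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL and credentials for the ALM project schema -- replace with your own.
        String url = "jdbc:sqlserver://almdb;databaseName=MY_PROJECT_DB";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             // List the logical parent paths of requirement repository files.
             ResultSet rs = st.executeQuery(
                     "SELECT SRLF_PARENT_PATH FROM SMART_REPOSITORY_LOGICALFILE "
                     + "WHERE SRLF_PARENT_PATH LIKE '%\\REQ\\%'")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}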
Related
Can someone explain the LOOKUP operation in NFS v3.0 in detail? What operations occur on the client side and on the server side?
This is something you should probably read up on, and it is too generic for Stack Overflow. The RFC says the following:
The LOOKUP procedure is used by the client to traverse multicomponent file names (pathnames). Each call to LOOKUP is used to resolve one segment of a pathname. There are two reasons for restricting LOOKUP to a single segment: it is hard to standardize a common format for hierarchical file names and the client and server may have different mappings of pathnames to file systems. This would imply that either the client must break the path name at file system attachment points, or the server must know about the client's file system attachment points. In NFS version 3 protocol implementations, it is the client that constructs the hierarchical file name space using mounts to build a hierarchy. Support utilities, such as the Automounter, provide a way to manage a shared, consistent image of the file name space while still being driven by the client mount process.
See https://www.ietf.org/rfc/rfc1813.txt for more information.
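To make the one-segment-per-call idea concrete, here is a small conceptual sketch (plain Java, not a real NFS client): the client walks the pathname itself and issues one LOOKUP per component, feeding the file handle returned by the previous call into the next one. The lookup method below is only a stand-in for the actual RPC.

import java.util.Arrays;

public class LookupWalk {

    /** Opaque file handle as returned by the server. */
    static class FileHandle {
        final String id;
        FileHandle(String id) { this.id = id; }
    }

    /** Stand-in for the LOOKUP RPC: resolves ONE name inside ONE directory. */
    static FileHandle lookup(FileHandle dir, String name) {
        // A real client would send (directory handle, name) to the server and get a new handle back.
        return new FileHandle(dir.id + "/" + name);
    }

    /** The client walks the path itself, one LOOKUP round trip per component. */
    static FileHandle resolve(FileHandle root, String path) {
        FileHandle current = root;
        for (String component : path.split("/")) {
            if (component.isEmpty()) continue;    // skip leading or duplicate slashes
            current = lookup(current, component); // one call per segment
        }
        return current;
    }

    public static void main(String[] args) {
        FileHandle root = new FileHandle("ROOT");
        System.out.println(resolve(root, "/export/home/user/file.txt").id);
    }
}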
I have generated a webgraph db in Apache Nutch using the command 'bin/nutch webgraph -segmentDir crawl/segments -webgraphdb crawl/webgraphdb'.... It generated three folders in crawl/webgraphdb: inlinks, outlinks and nodes. Each of those folders contains two binary files, data and index. How do I get a visual web graph in Apache Nutch? What is the use of the web graph?
The webgraph is intended to be a step in the score calculation based on the link structure (i.e. the webgraph):
webgraph will generate the data structure for the specified segment/s
linkrank will calculate the score based on the previous structures
scoreupdater will update the score from the webgraph back into the crawldb
Be aware that this program is very CPU/IO intensive and that it ignores the internal links of a website by default.
You could use the nodedumper command to get useful data out of the webgraph data, including the actual score of a node and the highest scored inlinks/outlinks. But its output is not intended to be visualized, although you could parse the output of this command and generate any visualization you may need.
That being said, since Nutch 1.11 the index-links plugin has been added, which allows you to index the inlinks and outlinks of each URL into Solr/ES. I've used this plugin, indexing into Solr, along with the sigmajs library to generate some graph visualizations of the link structure of my crawls; perhaps this could suit your needs.
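If you just want something quick to look at without the index-links route, you can also post-process the nodedumper output. The sketch below assumes a plain-text dump where each line is a URL and a score separated by a tab (check what your nodedumper invocation actually produces) and turns it into a minimal nodes-only JSON that sigmajs can load after a layout pass; the file names are placeholders.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;

public class NodesToJson {
    public static void main(String[] args) throws Exception {
        // Placeholder paths -- point these at your actual nodedumper output.
        try (BufferedReader in = new BufferedReader(new FileReader("nodes_dump/part-00000"));
             PrintWriter out = new PrintWriter("graph.json")) {
            out.println("{ \"nodes\": [");
            String line;
            boolean first = true;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split("\t");   // assumed format: URL <TAB> score
                if (parts.length < 2) continue;
                if (!first) out.println(",");
                first = false;
                String url = parts[0].replace("\"", "\\\"");
                // sigmajs needs an id and a label; the score is reused here as the node size.
                out.print("  { \"id\": \"" + url + "\", \"label\": \"" + url
                        + "\", \"size\": " + parts[1].trim() + " }");
            }
            out.println();
            out.println("] }");
        }
    }
}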
Firstly, apologies if this question seems like a wall of text; I can't think of a better way to format it.
I have a machine with valuable data on it (circa 1995). The machine is running Unix (SCO OpenServer 6) with some sort of database stored on it.
The data is normally accessed via a software package of which the license has expired and the developers are no longer trading.
The software package connects to the machine via telnet to retrieve and modify data (the telnet connection no longer functions due to the license change).
I can access the machine via an ODBC driver (SeaODBC.dll) over the network, which is how I was planning to extract the data, but so far I have retrieved 300,000 rows in just over 24 hours. I estimate there will be around 50,000,000 rows in total, so at the current speed it will take 6 months!
I need either a quicker way to extract the data from the machine via ODBC or a way to extract the entire DB locally on the machine to an external drive/network drive or other external source.
I've played around with the Unix interface and the only large files I can find are in a massive matrix of single-character folders (e.g. A\G\data.dat, A\H\Data.dat etc.).
Does anyone know how to find out which DB systems are installed on the machine? Hopefully it is a standard one and I'll be able to find a way to export everything into a nicely formatted file.
Edit
Digging around the file system, I have found a folder under root > L which contains lots of single-letter folders; each single-letter folder contains more single-letter folders.
There are also files which are named after the table I need (e.g. "ooi.r") and which have the following format:
<Id>
[]
l for ooi_lno, lc for ooi_lcno, s for ooi_invno, id for ooi_indate
require l="AB"
require ls="SO"
require id=25/04/1998
{<id>} is s
sort increasing Id
I do not recognize those kinds of filenames (A\G\data.dat and so on; filenames with backslashes in them?), and it's likely to be a proprietary format, so I wouldn't expect much from that avenue. You can try running file on them to see if they are in any recognized format...
I would suggest improving the speed of data extraction over ODBC by virtualizing the system. A modern computer will have faster memory, faster disks, and a faster CPU and may be able to extract the data a lot more quickly. You will have to extract a disk image from the old system in order to virtualize it, but hopefully a single sequential pass at reading everything off its disk won't be too slow.
I don't know what the architecture of this system is, but I guess it is x86, which means it might not be too hard to virtualize (depending on how well the SCO OpenServer 6 OS agrees with virtualization). You will have to use a hypervisor that supports full virtualization (not paravirtualization).
I finally solved the problem: running the query with another tool (not through MS Access or MS Excel) was massively faster. I ended up using DaFT (Database Fishing Tool) to SELECT INTO a text file, which processed all 50 million rows in a few hours.
It seems the DLL driver I was using doesn't work well with any MS products.
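For anyone who lands here before finding a tool like DaFT, the same bulk-dump idea can be scripted. Here is a rough sketch over the legacy JDBC-ODBC bridge (Java 7 or earlier), assuming a DSN called sco_data pointing at the SeaODBC driver and the table name ooi (both placeholders), streaming every row straight to a tab-separated file instead of pulling it through a GUI tool.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class OdbcDump {
    public static void main(String[] args) throws Exception {
        // Legacy JDBC-ODBC bridge; "sco_data" is a placeholder DSN for the SeaODBC driver.
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        try (Connection con = DriverManager.getConnection("jdbc:odbc:sco_data");
             Statement st = con.createStatement();
             BufferedWriter out = new BufferedWriter(new FileWriter("ooi_dump.tsv"))) {
            st.setFetchSize(10000);  // hint to fetch rows in large chunks; the bridge may ignore it
            ResultSet rs = st.executeQuery("SELECT * FROM ooi");  // "ooi" is a placeholder table name
            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) row.append('\t');
                    row.append(rs.getString(i));  // NULL columns come out as the string "null"
                }
                out.write(row.toString());
                out.newLine();
            }
        }
    }
}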
I need to dual boot 2 different task sequences (Win7 images) for different PC types which require different drivers. We have 2 images, one for staff and one for students, each of which can be added to a particular task sequence.
I need to create a portable solution for cloning without the network using 2 different SCCM (System Center Configuration Manager) task sequences. At the moment I go through the usual steps of creating boot media via the Configuration Manager, but there seems to be no way to create a script that changes the task media on the fly so you can select which OS image to deploy.
I was looking at a possible solution using YUMI (a USB boot tool), but each bootable image requires an ISO. The task sequence image is around 8 GB.
We use SCCM 2007. (Still waiting for a budget to upgrade to 2012 :) )
It sounds like you want to boot two different .WIM images.
Out of the box, I haven't found any tool from MS that will allow this. I have gotten around this limitation by renaming the .WIM I want to use to BOOT.WIM in the \SOURCES directory.
That is the name of the .WIM that gets used by all the default settings. You have to rename the file before you attempt to boot from the USB device, but it doesn't take long and could be scripted without much effort.
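For what it's worth, here is one way the rename could be scripted; this is a sketch only, where E:, STAFF.WIM and STUDENT.WIM are hypothetical names for the USB drive and the two images, and a small marker file remembers which image is currently sitting in place as BOOT.WIM.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SwapBootWim {
    public static void main(String[] args) throws Exception {
        // "E:" and the image names are hypothetical -- adjust for your USB stick.
        Path sources = Paths.get("E:\\SOURCES");
        Path bootWim = sources.resolve("BOOT.WIM");
        Path marker = sources.resolve("active.txt");  // remembers which image BOOT.WIM currently is
        String wanted = args.length > 0 ? args[0] : "STAFF.WIM";

        if (Files.exists(marker)) {
            String active = Files.readAllLines(marker, StandardCharsets.US_ASCII).get(0).trim();
            if (active.equalsIgnoreCase(wanted)) {
                System.out.println(wanted + " is already active.");
                return;
            }
            // Put the currently active image back under its own name...
            Files.move(bootWim, sources.resolve(active));
        }
        // ...and rename the image we want into place as BOOT.WIM.
        Files.move(sources.resolve(wanted), bootWim);
        Files.write(marker, wanted.getBytes(StandardCharsets.US_ASCII));
        System.out.println("BOOT.WIM is now " + wanted);
    }
}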
Theoretically, it should be possible to configure the BCD on the USB device (\EFI\MICROSOFT\BOOT\BCD or BOOT\BCD, depending on how the computer is configured to boot) so that you could choose which .WIM to use at boot time without the need to do any messy renaming. I haven't gotten this to work yet (mostly due to lack of time/urgency), but I did write down what I had done so far. I found some useful information about booting to .WIMs from windowsitpro.com.
I'm writing a MIDlet which needs to write file. I'm using FileConnection from JSR-75 to accomplish this.
The intention is to have this MIDlet running on as many devices as possible (all MIDP 2.0 devices with JSR-75 support, ideally).
On several emulators and an HTC Touch Pro2, I can perfectly use the following code to get the root of the filesystem:
import java.util.Enumeration;
import javax.microedition.io.file.FileSystemRegistry;
// Take the first file system root reported by the device and build a URL from it.
Enumeration drives = FileSystemRegistry.listRoots();
String root = (String) drives.nextElement();
String path = "file:///" + root;
However, on a Nokia S60 5th edition emulator, trying to open a FileConnection to this path throws a java.lang.SecurityException. Apparently S60 devices do not allow connections to the root of the filesystem. I realise I can use something like System.getProperty("fileconn.dir.photos"), but that isn't supported on all devices either.
So, my actual question: what is the best approach to get a path to create a FileConnection with, that allows for maximum portability?
Thanks.
Edit:
I suppose I could iterate over all the roots in the Enumeration, and check for a writable one, but that's hardly optimal for two reasons. First, there aren't necessarily any writable roots. Second, this could be the phone memory or a memory card, so the storage method wouldn't be consistent across devices, which is rather ugly.
You are supposed to open read-only connections to roots in order to find out what folder they contain.
As a general rule, when opening a read_write connection to a folder throws a SecurityException, try to open a read-only connection to browse through sub-folders in order to find a writable one.
Specifically on Symbian (and other platforms advanced enough to provide secure data cages to your MIDlets), you can use System.getProperty("fileconn.dir.private"); to find a writable area.
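A simplified sketch of that approach in Java ME (JSR-75): prefer the private data cage when the platform exposes it, otherwise browse each root with a read-only connection and keep the first one that reports itself writable. Error handling is kept minimal on purpose.

import java.util.Enumeration;
import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;
import javax.microedition.io.file.FileSystemRegistry;

public class WritablePathFinder {

    /** Returns a URL the MIDlet can write to, or null if none was found. */
    public static String findWritablePath() {
        // Prefer the private data cage where the platform provides one (e.g. Symbian).
        String privateDir = System.getProperty("fileconn.dir.private");
        if (privateDir != null) {
            return privateDir;
        }
        // Otherwise browse each root read-only and look for a writable one.
        Enumeration roots = FileSystemRegistry.listRoots();
        while (roots.hasMoreElements()) {
            String url = "file:///" + (String) roots.nextElement();
            try {
                FileConnection fc = (FileConnection) Connector.open(url, Connector.READ);
                try {
                    if (fc.canWrite()) {
                        return url;
                    }
                } finally {
                    fc.close();
                }
            } catch (Exception e) {
                // SecurityException or IOException: this root is off limits, try the next one.
            }
        }
        return null;
    }
}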
I will tell you what we do. We have a test app that just finds out the file system root and the SD card root, if applicable. We set this as a JAD parameter, and the code reads it from the JAD file. Since you don't need to recompile the JAR for different devices, this works out very well: just change the JAD parameter for a handset with a different file system root.
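A sketch of that JAD-parameter approach; the attribute name FileSystemRoot and the fallback value are hypothetical, use whatever you put in your own JAD.

import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;
import javax.microedition.midlet.MIDlet;
import javax.microedition.midlet.MIDletStateChangeException;

public class RootFromJadMIDlet extends MIDlet {

    protected void startApp() throws MIDletStateChangeException {
        // "FileSystemRoot" is whatever attribute name you chose in the JAD, e.g.
        //   FileSystemRoot: e:/
        String root = getAppProperty("FileSystemRoot");
        if (root == null) {
            root = "c:/";  // hypothetical fallback if the attribute is missing
        }
        try {
            // Create a file directly under the configured root.
            FileConnection fc = (FileConnection) Connector.open(
                    "file:///" + root + "myapp-data.txt", Connector.READ_WRITE);
            if (!fc.exists()) {
                fc.create();
            }
            fc.close();
        } catch (Exception e) {
            // handle missing permissions / unsupported roots here
        }
    }

    protected void pauseApp() { }

    protected void destroyApp(boolean unconditional) { }
}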