JetBrains Aqua - Multiple HTTP Requests - testing

I want to test one HTTP endpoint with thousands of different request bodies, which I can generate in Excel. I also want to compare the results (or specific result fields) to expected results, which I will also have in that Excel file. Can I do this in JetBrains Aqua, or will I need another tool?
I've tried googling and searching for some tools.
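If Aqua can't do this out of the box, I imagine falling back to a small script; something along these lines is roughly the workflow I'm after (a sketch in Python, assuming a CSV exported from the Excel sheet and made-up column names), but I'd prefer a ready-made tool:

```python
import csv
import json
import requests

ENDPOINT = "https://example.com/api/endpoint"   # placeholder endpoint
CASES_FILE = "cases.csv"     # exported from Excel: request_body, expected_field columns
REPORT_FILE = "report.csv"   # pass/fail per row, to compare back against the sheet

with open(CASES_FILE, newline="", encoding="utf-8") as cases, \
     open(REPORT_FILE, "w", newline="", encoding="utf-8") as report:
    writer = csv.writer(report)
    writer.writerow(["row", "passed", "actual", "expected"])
    for i, row in enumerate(csv.DictReader(cases), start=1):
        resp = requests.post(ENDPOINT, json=json.loads(row["request_body"]), timeout=30)
        actual = resp.json().get("some_field")   # the response field to check, made up here
        expected = row["expected_field"]
        writer.writerow([i, str(actual) == expected, actual, expected])
```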

Related

How to compare content between two web pages in different environments?

We are rebuilding an existing website from scratch. The new site is meant to be an identical copy, and since it contains many pages we need a way to compare content between the two sites. It is of course possible to do this manually, but it takes a lot of time and carries a risk of human error.
I have seen services that offer this: you enter two URLs, they are analyzed, and discrepancies are reported. However, these cannot be used because our test environment is local (built in Sitecore).
Is there a way to solve this without making our test environment available online (which is not an option)? For example, is there software for this, or alternatively a service that can compare a web page that is online with one that is local?
Note that we're only looking for content comparison (not visual).
(Un)fortunately there are many ways to do this, but fortunately some of them are simple.
What I would do is:
Get a list of URLs for each site. If the sitemap is exhaustive you could use that; if it's not, you might want to run some Sitecore PowerShell to generate the lists.
Given the lists (from files, the Sitecore API, or similar), write a program to visit each URL, get the text of the page after it's done rendering, and save it to disk (something like Selenium is good for this, and you can use any language; there's a sketch below). You'll want a folder structure like host/urlpart/urlpart/pagename.txt, essentially mirroring your content tree.
Use a filesystem diff program like WinMerge to compare the two folders.
This is quick and dirty, but a good place to start.
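For step 2, a minimal sketch of the crawl-and-save part, assuming Python, Selenium with headless Chrome, and a urls.txt list of paths; the base URL and output folder are placeholders:

```python
import pathlib
import urllib.parse
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

BASE = "http://localhost"               # root of the environment being crawled (placeholder)
OUT = pathlib.Path("snapshots/local")   # one output tree per environment (placeholder)

options = Options()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)

for line in pathlib.Path("urls.txt").read_text().splitlines():
    url = urllib.parse.urljoin(BASE, line.strip())
    driver.get(url)                     # add explicit waits here if pages render via JS
    text = driver.find_element(By.TAG_NAME, "body").text   # rendered text, no markup
    rel = urllib.parse.urlparse(url).path.strip("/") or "home"
    target = (OUT / rel).with_suffix(".txt")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(text, encoding="utf-8")

driver.quit()
```

Run it once per environment, pointing BASE and OUT at the other site and a second folder, then point WinMerge at the two trees.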

Getting Search Server to ignore SharePoint document data and speed up crawl times

Background:
I have a SharePoint Foundation 2010 installation that is being used to store scanned images of paper documents, creating an electronic version of the paper file folders we keep for each of our company's clients. All of the documents are stored as PDF files.
The configuration includes a web server housing SharePoint and the Search Server 2010 Express service, as well as a separate database server housing both the content data and the search crawl store. Both the SharePoint/Search box and the SQL box are VMware VMs running on shared hosts (including a shared SAN) alongside our other production servers.
Each file added to SharePoint must be added through a custom interface, including metadata tags for client information (a site content type with a set of site columns defines this extra metadata). We then expose this client-identifying data to the search server by setting managed properties, so we can run queries against the search web service specifying WHERE CustomClientID = X.
Our data currently resides in two large document libraries, one for each arm of the company.
After a few years of operation our server now holds some 250,000 documents, and we are having issues: full crawls (run weekly, off hours) sometimes crash partway through, and our incremental crawls (run every 5 minutes during work hours) take 7-8 minutes to pick up 2-3 new files.
Question:
I was wondering if there is a way to get the Search Server crawler to pick up only the metadata we are supplying and ignore the document contents entirely, which I assume would speed up the crawl process by orders of magnitude. I believe this feature is described as full-text search, but I have not been able to find anything that explains whether it can be turned off.
If not, is there an alternative option for speeding up crawl times that anyone would advise?

How to merge two content sources in SharePoint 2010?

In my SharePoint 2010 website, I added two content sources:
file system (shared folder)
BDC data (Line of Business Data)
I added the managed properties to map the metadata of the BDC data.
My search results are coming out like this:
I would like to link the two content sources; my second content source has the file-related information (tab, category, fileno, case name).
I added the columns and also altered the XSLT in the search results web part. The results come out as shown below.
In the results, the third item (120) comes from the database, so all the properties are mapped (caseid, casename, fileno, doctab, description).
But the properties are not being mapped for the file system results. The file system is related to the table through the file name, and the path of each file also encodes some information:
file://192.168.25.231/FolderName/CaseID/documenttab/filename
CaseID is the primary key of the table that I added as the second content source.
How can I achieve this?
Hmm, it's difficult to add much more without seeing the environment, but here's plan B.
Given you're using the BCS and want to display both unstructured content (the files) and application data that shares metadata with the files, you could try the following. It will require some coding knowledge: you can make connections between web parts in SharePoint Designer, but this will need Visual Studio.
create a custom search results page, and use the standard core search results web part along with a separate data web part for displaying the application data
create a custom query box for entering the search query, probably best done with separate fields for the metadata - case ID, case name, etc. (You'd normally use a data filter web part, but that won't pass results through to the normal search results - you need custom code to run two queries.)
format and pass the query to both the core search results web part, and the BCS data web part, to display items that match the query
That's probably as much as I can help with. The SharePoint section on MSDN should be the next port of call. Good luck!
This may be an overly simplistic explanation to keep the response as short as possible.
For your search results page, the best approach when also retrieving application data is to not present that information in the core search results web part. Exclude it from the default scope. Instead, use a federated search results web part added to the results page. You'll also need to create the corresponding federation location for the scope (easy to do), and you can then use XSLT to style the display of the results - application data needs to be presented differently from links to files and web pages.
Then a search for, say, the case ID will display all files containing that information in the core search results web part, and will display any matching application data in the federated results web part, with the different formatting applied. Note - there will be no connection between the two. The only relationship is that they both match the search query. It is possible to connect web parts to filter one based on the selected value in another, but that is an entirely different approach and not easily done using search results.

SSRS 2008: How to generate multiple reports immediately?

I'm building a site that brings up SSRS reports by opening new windows with the report URL and report parameters. I can currently open a window for each report users want to run.
However, they also want the option to save the reports to a file share or SharePoint site of their choice, instead of having a bunch of browser pop-up windows for each report.
I understand I can use the SSRS web services to set up a schedule (to run a couple of minutes after the time of the request) that saves those files to a file share (or SharePoint), but that seems like a hack for a one-time generation of reports onto a file share or SharePoint.
Is there any other way to generate a batch of reports immediately, one time, without having to set them up on a scheduler that runs a couple of minutes after they are requested?
"Note, they DO NOT want one report that has all the reports in it, these are seperate reports that are already built, and they want one file/window per report."
I'm not sure what you mean when you say you want them all at once but one file/window per report. What presentation layer is showing this? You can make three separate web calls at the same time to the report server instead of the hosting site:
http://(servername)/(ReportServer)/PathtoReport1
http://(servername)/(ReportServer)/PathtoReport2
http://(servername)/(ReportServer)/PathtoReport3
instead of
http://(servername)/(Reports)
If you just mean 'separate pages' in an Excel workbook, you can do that with one report nesting other subreports: build a master report with rectangle objects whose properties define page breaks, and place a subreport in each of these rectangles.
Or you could make an HTML page that issues the three calls separately, each in a 'form' element doing a 'post':
<form id="SSRSRender" action="http://(servername)/(reportServer)/(report)" method="post" target="_self">
"However, they also want the option to save the reports to a file share or Sharepoint of their choice, instead of having a bunch of browser window pop-ups for each report.
I understand I can use SSRS web services to setup a schedule (to run in a couple minutes from the time of request) which can save those files to a file share (or Sharepoint) but that seems like a hack to get a one time generating of reports onto a file share or sharepoint."
That's not a hack; using the built-in web service scheduler is the preferred method of saving a file. Once a report is hosted (on a server running SSRS) it can have configs set for SMTP send-outs, file saves, and snapshots.
If that is not enough, you can create your own proxy classes in C# or VB.NET and build your own front end that talks to SSRS through SOAP requests to the web service.
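If a one-off run really must avoid the scheduler, another documented route is SSRS URL access: request each report with rs:Format=PDF and write the response wherever you like, including the file share. A rough sketch, assuming Python with the requests and requests_ntlm packages; the server name, report paths, parameter, and output share are all placeholders:

```python
import pathlib
import requests
from requests_ntlm import HttpNtlmAuth   # SSRS is typically behind Windows auth

SERVER = "http://servername/ReportServer"          # the ReportServer endpoint, not /Reports
REPORTS = ["/Sales/Report1", "/Sales/Report2", "/Sales/Report3"]   # placeholder report paths
OUT = pathlib.Path(r"\\fileserver\share\reports")  # placeholder file share
auth = HttpNtlmAuth("DOMAIN\\user", "password")

for report in REPORTS:
    # rs:Command=Render / rs:Format=PDF are standard URL-access parameters;
    # report parameters (CustomerID here is made up) are appended the same way.
    url = f"{SERVER}?{report}&rs:Command=Render&rs:Format=PDF&CustomerID=42"
    resp = requests.get(url, auth=auth, timeout=300)
    resp.raise_for_status()
    out_file = OUT / (report.strip("/").replace("/", "_") + ".pdf")
    out_file.write_bytes(resp.content)
```

rs:Format also accepts EXCEL, CSV, and the other installed rendering extensions, so the same loop covers other output formats.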

configure multiple servers and scale

I have been given a task to configure 1000s of servers with some simple data. Let's say I need to log in to each server (Linux or Windows) and set up the NTP server. I need to come up with some kind of automation framework using Perl. I have some ideas and want to get more.
Here is my thought process:
a) Since there are 1000s of servers, the framework should definitely be able to read in a CSV file so all inputs can be provided at once, as opposed to a single input.
b) Since there are so many servers, I have to find a way to do things in parallel. I can't go server by server sequentially.
c) I should have some output file that records the results: servers that were successfully configured and servers that failed. That way I can compare the input and output files and generate a report.
Should I consider anything else in my framework?
How can I do parallel processing using Perl?
Even if you want to stick with Perl, it looks like there are already some alternatives available that would keep you from implementing another framework from scratch.
Check out the comments from http://my.opera.com/cstrep/blog/2010/05/14/puppet-fabric-and-a-perl-alternative for a couple options.
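If you do end up rolling your own instead, the CSV-in / parallel-work / CSV-out shape from your points (a)-(c) is fairly small. In Perl the usual building blocks are Parallel::ForkManager plus Text::CSV; the sketch below shows the same pattern in Python with concurrent.futures just to illustrate the structure, and configure_ntp is a placeholder for your real SSH/WinRM logic:

```python
import csv
from concurrent.futures import ThreadPoolExecutor, as_completed

def configure_ntp(host, ntp_server):
    """Placeholder: SSH/WinRM to the host and point it at ntp_server."""
    return "ok"

def worker(row):
    try:
        return {"host": row["host"], "status": configure_ntp(row["host"], row["ntp_server"])}
    except Exception as exc:        # record failures instead of aborting the whole run
        return {"host": row["host"], "status": f"failed: {exc}"}

with open("servers.csv", newline="") as f:          # input: host, ntp_server columns
    rows = list(csv.DictReader(f))

with ThreadPoolExecutor(max_workers=50) as pool, \
     open("results.csv", "w", newline="") as out:   # output: one status line per server
    writer = csv.DictWriter(out, fieldnames=["host", "status"])
    writer.writeheader()
    for future in as_completed([pool.submit(worker, r) for r in rows]):
        writer.writerow(future.result())
```

The key design point, whatever the language, is a bounded pool of workers and per-host error capture, so one unreachable server doesn't stall or kill the whole run.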