Changing Visual Studio 2015 Web Test response from hex to JSON

How can the bottom pane of a Visual Studio Web Test response show JSON instead of hex values?

I have not found any way of getting just the JSON in the bottom panel. Some responses include the "View in html browser" link; clicking that will normally show just the JSON.
The workaround I normally use is to copy the entire response body from the bottom panel, paste it into a text editor (you could keep a text file open in Visual Studio as a work area, but I use Notepad++ for this job) and then remove the hex part of the copied text. Both Visual Studio and Notepad++ support a column (or box, or rectangular) selection mode, allowing the entire hex part to be selected and deleted. The final step is to join the lines to form one long line. This job is so useful, but so tedious in an editor, that I wrote a little C# program to do it.
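For reference, a minimal sketch of such a clean-up program follows. It assumes the copied text was saved to a file passed as the first argument, and that each copied line has a fixed-width offset and hex section followed by the text column; the TextColumn offset is an assumption, so adjust it to match what the response pane actually produces.
using System;
using System.IO;
using System.Text;

class HexDumpCleaner
{
    // Assumed column at which the text portion starts on each copied line;
    // adjust to the actual layout of the response pane.
    const int TextColumn = 62;

    static void Main(string[] args)
    {
        var result = new StringBuilder();
        foreach (var line in File.ReadLines(args[0]))
        {
            // Drop the offset and hex columns, keep only the text column.
            if (line.Length > TextColumn)
                result.Append(line.Substring(TextColumn).TrimEnd());
        }
        // All lines joined into one long string (the JSON body).
        Console.WriteLine(result.ToString());
    }
}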

There is a way to do it. Create an extraction rule that selects a token. The rule always reports success, so the token does not actually need to exist in the response. When used, it makes the response window display the body as formatted JSON:
using System.ComponentModel;
using Microsoft.VisualStudio.TestTools.WebTesting;
using Newtonsoft.Json.Linq;

namespace WebTestPlugins
{
    [DisplayName("Output in JSON")]
    [Description("Formats the response viewer output as JSON")]
    public class OutputInJSON : ExtractionRule
    {
        public override void Extract(object sender, ExtractionEventArgs e)
        {
            // Parsing the body is enough for the result viewer to render it as JSON.
            var response = e.Response.BodyString;
            var parsedResponse = JObject.Parse(response);
            e.WebTest.Context.Add("xxxz", parsedResponse.SelectToken("xxxx"));
            e.Success = true;
        }
    }
}

iText 7 need to skip reading page header elements

I am using an EventHandler to create a page header for my PDF. The contents of the header are added into a Table before being added to a Canvas. As part of 508 compliance, I need to exclude the header content from being read out loud. How do I accomplish this?
using iText.Kernel.Events;
using iText.Kernel.Geom;
using iText.Kernel.Pdf;
using iText.Kernel.Pdf.Canvas;
using iText.Layout;
using iText.Layout.Borders;
using iText.Layout.Element;
using iText.Layout.Properties;

public class TEirHeaderEventHandler : IEventHandler
{
    public void HandleEvent(Event e)
    {
        PdfDocumentEvent docEvent = (PdfDocumentEvent)e;
        PdfDocument pdf = docEvent.GetDocument();
        PdfPage page = docEvent.GetPage();
        PdfCanvas headerPdfCanvas = new PdfCanvas(page.NewContentStreamBefore(), page.GetResources(), pdf);
        Rectangle headerRect = new Rectangle(60, 725, 495, 96);
        Canvas headerCanvas = new Canvas(headerPdfCanvas, pdf, headerRect);
        //creating content for header
        CreateHeaderContent(headerCanvas);
        headerCanvas.Close();
    }

    private void CreateHeaderContent(Canvas canvas)
    {
        //Create header content
        Table table = new Table(UnitValue.CreatePercentArray(new float[] { 60, 25, 15 }));
        table.SetWidth(UnitValue.CreatePercentValue(100));
        Cell cell1 = new Cell().Add(new Paragraph("Establishment Inspection Report").SetBold().SetTextAlignment(TextAlignment.LEFT));
        cell1.SetBorder(Border.NO_BORDER);
        table.AddCell(cell1);
        Cell cell2 = new Cell().Add(new Paragraph("FEI Number:").SetBold().SetTextAlignment(TextAlignment.RIGHT));
        cell2.SetBorder(Border.NO_BORDER);
        table.AddCell(cell2);
        Cell cell3 = new Cell().Add(new Paragraph(_feiNum).SetBold().SetTextAlignment(TextAlignment.RIGHT));
        cell3.SetBorder(Border.NO_BORDER);
        table.AddCell(cell3);
        canvas.Add(table);
    }
}
public static void CreatePdf()
{
    using (MemoryStream writeStream = new MemoryStream())
    using (FileStream inputHtmlStream = File.OpenRead(inputHtmlFile))
    {
        PdfDocument pdf = new PdfDocument(new PdfWriter(writeStream));
        pdf.SetTagged();
        iTextDocument document = new iTextDocument(pdf);
        TEirHeaderEventHandler teirEvent = new TEirHeaderEventHandler();
        pdf.AddEventHandler(PdfDocumentEvent.START_PAGE, teirEvent);
        //Convert html to pdf
        HtmlConverter.ConvertToDocument(inputHtmlStream, pdf, properties);
        document.Close();
        byte[] bytes = TEirReorderingPages(writeStream, numOfPages);
        File.WriteAllBytes(outputPdfFile, bytes);
    }
}
Note that I have set the document to be tagged, but I still get the "Reading Untagged Document" screen when I open the file. However, all of the content, including the header, is read when I activate the Read Out Loud feature. Any input or suggestion would be appreciated. Thank you in advance for your help.
General
The approach suggested by Alexey Subach is generally correct. You mark the content as artifact to differentiate it from real content.
element.GetAccessibilityProperties().SetRole(StandardRoles.ARTIFACT);
This marks the content in the content stream and it excludes the element from the structure tree.
Your case
However, your specific case is more nuanced.
For a well tagged PDF document, the proper way to read it out loud is to process the structure tree, which is a data structure that represents the logical reading order of the (semantic) elements of the document, such as paragraphs, tables and lists.
Because of the way you are creating the header content, it is not automatically tagged: a Canvas instance that is created from a PdfCanvas instance has autotagging disabled by default. So the table in the header is not marked in the content stream and it is not included in the structure tree. Marking it explicitly as an artifact, with the approach described above in General, should not make a significant difference because it was not in the structure tree to begin with.
If you enable autotagging by adding headerCanvas.EnableAutoTagging(page), you will notice that the table does appear in the structure tree.
If you then add table.GetAccessibilityProperties().SetRole(StandardRoles.ARTIFACT), the table is excluded from the structure tree again.
Summary: looking at the structure tree, there's no difference between your original code and the approach of General.
Adobe reading order / accessibility settings
From your description, I think you are using Adobe Acrobat or Reader for the read out loud functionality. Under Preferences > Reading > Reading Order Options, you can configure how the content should be processed for the read out loud feature:
From https://helpx.adobe.com/reader/using/accessibility-features.html:
Infer Reading Order From Document (Recommended): Interprets the reading order of untagged documents by using an advanced method of structure inference layout analysis.
Left-To-Right, Top-To-Bottom Reading Order: Delivers the text according to its placement on the page, reading from left to right and then top to bottom. This method is faster than Infer Reading Order From Document. This method analyzes text only; form fields are ignored and tables aren’t recognized as such.
Override The Reading Order In Tagged Documents: Uses the reading order specified in the Reading preferences instead of what the tag structure of the document specifies. Use this preference only when you encounter problems in poorly tagged PDFs.
In my tests, the only way I can make Adobe Reader read the header content created with your original code out loud is to select Left-To-Right, Top-To-Bottom Reading Order and to enable Override The Reading Order In Tagged Documents. In that case, it basically ignores the tagging and just processes the content according to its location on the page.
With Override The Reading Order In Tagged Documents disabled, the header content is not read, whether with your original code or with explicit artifacts.
Conclusion
Although it's a good idea to always tag artifacts as such, so they can be properly differentiated from real content, in this case I believe the behaviour you're experiencing is more related to application configuration than to file structure.
Headers and footers are typically pagination artifacts and should be marked as such in the following way:
table.GetAccessibilityProperties().SetRole(StandardRoles.ARTIFACT);
This will exclude the table from being read. Please note that you can mark any element implementing the IAccessibleElement interface as an artifact.
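In the context of the question's C# event handler, that could look like the following (a minimal sketch; in the .NET API, StandardRoles lives in the iText.Kernel.Pdf.Tagging namespace):
using iText.Kernel.Pdf.Tagging;

// Inside CreateHeaderContent, before adding the table to the canvas:
// mark the whole header table as a pagination artifact, so a conforming
// reader excludes it from the logical reading order.
table.GetAccessibilityProperties().SetRole(StandardRoles.ARTIFACT);
canvas.Add(table);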

Word to HTML fields in header and footer

I'm using docx4j to convert a Word template to several HTML files, one per chapter.
The Word template has several custom properties mapped by several fields (DOCPROPERTY ...) represented as both simple and complex fields. I populate those properties to obtain Freemarker code when the Word document is converted to HTML (like ${...} or [#... /] directives).
In a later step I look for "heading 1" paragraphs to identify chapters and then split the document into several Word documents before conversion; these documents are then converted to HTML and written to temporary files.
Each document is successfully converted to HTML and fields are correctly replaced with my markers, but the header and footer parts are written wrongly: field codes are written before field values (e.g. DOCPROPERTY "PROPERTY_NAME" \* MERGEFORMAT ${constants['PROPERTY_NAME']}) instead of field values only (e.g. ${constants['PROPERTY_NAME']}).
If I write the updated document to a docx file instead, nothing seems wrong in the generated document.
If it's useful to solve the problem, this is what I do to split the document (per chapter):
clone the updated WordprocessingMLPackage (clone method)
delete every root element before the chapter's "heading 1" element
delete every root element from the "heading 1" element of the next chapter
convert the cloned and cleaned document
(Actually I don't use the clone method every time; instead I write the updated document to a ByteArrayOutputStream and then read it back for every chapter, inspired by the source of the clone method.)
I suspect this is due to a docx4j bug; has anybody else tried something similar?
Finally these are my platform details:
JDK 1.6
Docx4J v3.2.2
Thanks in advance for any help
EDIT
To produce freemarker markers in place of Word fields, I set document property values as follows:
traverse the document looking for simple or complex fields with new TraversalUtil(wordMLPackage.getMainDocumentPart().getContent(), visitor);, where visitor is my custom callback for finding fields and setting properties
while traversing the document I look for:
FldChar elements with type BEGIN, which I parse using FieldsPreprocessor.canonicalise((P) ((R) fc.getParent()).getParent(), fields); (I don't use the return value of canonicalise), where fc is the found FldChar and fields is an empty ArrayList<FieldRef>; then I extract and parse the field's instrText attribute
CTSimpleField elements, which I parse using FldSimpleModel fldSimpleModel = new FldSimpleModel(); fldSimpleModel.build((CTSimpleField) o, null);; then I use fldSimpleModel.getFldArgument() to get the property name
I look for the freemarker code to show in place of the current field and set it as the property value using wordMLPackage.getDocPropsCustomPart().setProperty(propertyName, finalValue);
finally I do the same from step 1 for headers and footers as follows:
List<Relationship> rels = wordMLPackage.getMainDocumentPart().getRelationshipsPart().getRelationships().getRelationship();
for (Relationship rel : rels) {
Part p = wordMLPackage.getMainDocumentPart().getRelationshipsPart().getPart(rel);
if (p == null) {
continue;
}
if (p instanceof ContentAccessor) {
new TraversalUtil(((ContentAccessor) p).getContent(), visitor);
}
}
Finally I update fields as follows
FieldUpdater updater = new FieldUpdater(wordMLPackage);
try {
updater.update(true);
} catch (Docx4JException ex) {
Logger.getLogger(WorkerDocx4J.class.getName()).log(Level.SEVERE, null, ex);
}
After filling all field properties, I clone the document as previously described and convert filtered cloned instances using
HTMLSettings settings = Docx4J.createHTMLSettings();
settings.setWmlPackage(wordDoc);
settings.setImageHandler(new InlineImageHandler(myDataModel));
Docx4jProperties.setProperty("docx4j.Convert.Out.HTML.OutputMethodXML", true);
ByteArrayOutputStream os = new ByteArrayOutputStream();
os.write("[#ftl]\r\n".getBytes("UTF-8"));
Docx4J.toHTML(settings, os, Docx4J.FLAG_EXPORT_PREFER_XSL);
String template = new String(os.toByteArray(), "UTF-8");
then I obtain in template variable the resulting freemarker template.
The following XML is the content of footer1.xml part of the document generated after updating the document properties as described: footer1.xml after field updates
The very strange thing (in my opinion) is that if some properties are not found, step 5 throws an Exception (ok), field updating stops at the wrong field (ok), and all fields in the header and footer are rendered correctly. In this case, this is the content for footer1.xml.
In the latter case, the fields are defined in a different way. I think the HTML converter handles the latter case well and does something wrong in the former.
Is there something I'm doing wrong, or something I can do better?

How to make the jEdit file dropdown display the absolute path (not the filename followed by the directory)?

It's all in the title.
If I have opened the three files:
/some/relatively/long/path/dir1/file_a
/some/relatively/long/path/dir1/file_b
/some/relatively/long/path/dir2/file_a
The file dropdown contains:
file_a (/some/relatively/long/path/dir1)
file_a (/some/relatively/long/path/dir2)
file_b (/some/relatively/long/path/dir1)
And that bothers me, because I have to look to the right to differentiate the two file_a entries, and to the left for the others. This happens a lot to me, mostly because I code in Python and thus often have several __init__.py files open.
How do I get jedit to display
/some/relatively/long/path/dir1/file_a
/some/relatively/long/path/dir1/file_b
/some/relatively/long/path/dir2/file_a
config:
jedit 5.1.0
java 1.6.0_26
Mac OS X 10.6
Unfortunately this is not easily possible currently; I just had a look at the source, and this is not configurable.
You can:
Submit a Feature Request to make this configurable (a good idea in any case)
Create, or have someone create, a startup macro that:
registers an EBComponent with the EditBus that listens for new EditPanes getting created
retrieves the BufferSwitcher from the EditPane
retrieves the ListCellRenderer from the BufferSwitcher
sets a new ListCellRenderer on the BufferSwitcher that first calls the retrieved ListCellRenderer and then additionally sets the text to value.getPath()
Try the Buffer List plugin to see whether it suits your needs
The following code implements the working part of the second option; it is runnable as BeanShell code and performs the manipulation for the current edit pane. The final editPane.repaint() call is not necessary when this is done in an EBComponent; it is only there so that the on-the-fly manipulation shows up immediately.
r = editPane.getBufferSwitcher().getRenderer();
editPane.getBufferSwitcher().setRenderer(
    new ListCellRenderer() {
        public Component getListCellRendererComponent(list, value, index, isSelected, cellHasFocus) {
            // delegate to the original renderer, then overwrite the displayed text
            rc = r.getListCellRendererComponent(list, value, index, isSelected, cellHasFocus);
            rc.setText(value.getPath());
            return rc;
        }
    });
editPane.repaint();

MonoDevelop: Reformatting Code - Tune Down

MonoDevelop 3.0 is great, but it's a little too overzealous about reformatting.
When I enter the code:
var serializer = new XmlSerializer (typeof(TDestination),extraTypes);
Then, when I press Ctrl+Shift+F to format the code, it changes this line to put one parameter on each line, which makes the code hard to read when function calls or lambda expressions are long:
var serializer = new XmlSerializer (
typeof(TDestination),
extraTypes
);
How do I get MonoDevelop to leave the line breaks as they are?
Go to Preferences >> Source Code >> Code Formatting >> C# Source Code. Select a custom policy, press the "C# Format" tab, and press the "Edit" button. I hope that helps!

Silverlight localization from database, not resx

I have a Silverlight 4 OOB application which needs localizing. In the past I have used the conventional resx route but I have been asked to follow the architecture of an existing winforms app.
All the strings are currently stored in a database - I use a webservice to pull these down and write them into a local Effiproz Isolated Storage database. On login I load a Dictionary object with the language strings for the user's language. This works fine.
However, I want to automate the UI localization (the WinForms app does it like this):
Loop through all the controls on the page and look for any TextBlocks; if there is a text property, replace it with the localized version. If the text is not found, write the string to the database for localization.
This works ok on simple forms, but as soon as you have expanders/scrollviewers and content controls, the VisualTree parser does not return the children of the controls, as they are not necessarily visible (see my code below). This is a known issue and it thwarts my automation attempt.
My first question is: Is there a way of automating this on page load by looping through the complex (non-visual) elements and looking up the value in a dictionary?
My second question is: if not, is the best way of handling this to load the strings into an app resource dictionary and change all my pages to reference it, or should I look into generating resx files, either on the server (and package them with the app as per normal) or on the client (I have the downloaded strings; can I make and load resx files?)
Thanks for any pointers.
Here is my existing code that does not work on collapsed elements and complex content controls:
public void Translate(DependencyObject dependencyObject)
{
    //this uses the VisualTreeHelper, which only shows controls that are actually visible
    //(so if they are in a collapsed expander they will not be returned). You need to call
    //it OnLoaded to make sure all controls have been added.
    foreach (var child in dependencyObject.GetAllChildren(true))
    {
        TranslateTextBlock(child);
    }
}

private void TranslateTextBlock(DependencyObject child)
{
    var textBlock = child as TextBlock;
    if (textBlock == null) return;
    var value = (string)child.GetValue(TextBlock.TextProperty);
    if (!string.IsNullOrEmpty(value))
    {
        var newValue = default(string);
        if (!_languageMappings.TryGetValue(value, out newValue))
        {
            //write the value back to the collection so it can be marked for translation
            _languageMappings.Add(value, string.Empty);
            newValue = "Not Translated";
        }
        child.SetValue(TextBlock.TextProperty, newValue);
    }
}
Then I have tried 2 different approaches:
1) Store the strings in a normal dictionary object.
2) Store the strings in a normal dictionary object and add it to the Application as a resource; then you can reference it as:
<TextBlock Text="{Binding Path=[Equipment], Source={StaticResource ResourceHandler}}" />
App.GetApp.DictionaryStrings = new AmtDictionaryDAO().GetAmtDictionaryByLanguageID(App.GetApp.CurrentSession.DefaultLanguageId);
Application.Current.Resources.Add("ResourceHandler", App.GetApp.DictionaryStrings);
//http://forums.silverlight.net/forums/p/168712/383052.aspx
Ok, so nobody answered this and I came up with a solution.
Basically it seems that you can load the language dictionary into your global resources using
Application.Current.Resources.Add("ResourceHandler", App.GetApp.DictionaryStrings);
<TextBlock Text="{Binding [Equipment], Source={StaticResource ResourceHandler}}" />
and then access it like a normal StaticResource. We have the requirement of noting all our missing strings in a database for translation; for this reason I chose to use a Converter that calls a Localise extension method (so it can be done on any string in the code-behind), which then looks the string up in the dictionary (not the resource) and can do something with it (write it to a local DB) if it does not exist.
Text="{Binding Source='Logged on User', Converter={StaticResource LocalizationConverter}}"/>
This method works ok for us.