I have a feature which provisions a custom Default.aspx file to my publishing site.
In the elements.xml, I have various AllUsersWebPart nodes which populate the zones with web parts. I need to supply a hardcoded ID (guid) to each of these web parts at this point -- does anyone know how to do this?
I know that the format of a web part ID is g_00000000_0000_0000_0000_000000000000, but if I add an ID property (see below) and then activate my feature, the GUIDs of the provisioned web parts all come out different from the ones I specified.
<AllUsersWebPart WebPartZoneID="TopZone" WebPartOrder="1">
<![CDATA[
<webParts>
<webPart xmlns="http://schemas.microsoft.com/WebPart/v3">
<metaData>
<type name="..." />
<importErrorMessage>...</importErrorMessage>
</metaData>
<data>
<properties>
<property name="ID" type="string">g_FB777184_F9AB_4747_AA71_1BF0C96E535A</property>
</properties>
</data>
</webPart>
</webParts>
]]>
</AllUsersWebPart>
FYI: I need to hardcode an ID for each of my web parts because I have a separate feature receiver which uses the ID to locate each web part on the page (I have six identical parts on the page with the same title) and then assigns an audience to each part (so that only one is visible to a given user at any time).
You were so close. Try using the "ID" attribute of the "AllUsersWebPart" element (or, if provisioning a List View Web Part, of the "View" element):
<View List="Lists/Announcements" BaseViewID="0" WebPartZoneID="Top" ID="g_42fa735b_3cae_47d2_a773_b591b9abfd7a" />
or
<AllUsersWebPart ID="g_21f871c5_3575_4182_a7e2_64f682877071" WebPartOrder="1" WebPartZoneID="Top">
<![CDATA[
<webParts>
<webPart xmlns="http://schemas.microsoft.com/WebPart/v3">
<metaData>
<type name="Microsoft.SharePoint.WebPartPages.XsltListViewWebPart, Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" />
<importErrorMessage>Cannot import this Web Part.</importErrorMessage>
</metaData>
<data>
<properties>
<property name="ListUrl" type="string">Lists/Tasks</property>
<property name="ExportMode" type="exportmode">All</property>
</properties>
</data>
</webPart>
</webParts>
]]>
</AllUsersWebPart>
(A completely different approach to your original problem, not really an answer to the question:)
Why don't you use the SPWebPartManager (from within the feature receiver) to grab the web parts from the page and set your properties that way?
I'm assuming the six identical parts will be in the same zone, placed one after another, so you could just iterate through the WebParts collection on the manager and filter to the zone the parts belong to; a rough sketch follows.
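A minimal sketch of that idea, assuming a web-scoped feature and the server object model's SPLimitedWebPartManager; the page URL, zone name, and whatever property you actually set are illustrative placeholders:
// Assumed namespaces: Microsoft.SharePoint, Microsoft.SharePoint.WebPartPages,
// System.Web.UI.WebControls.WebParts.
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    SPWeb web = (SPWeb)properties.Feature.Parent;   // assumes a web-scoped feature

    // Open the shared (non-personalized) view of the provisioned page.
    SPLimitedWebPartManager manager =
        web.GetLimitedWebPartManager("Pages/Default.aspx", PersonalizationScope.Shared);

    foreach (System.Web.UI.WebControls.WebParts.WebPart part in manager.WebParts)
    {
        if (manager.GetZoneID(part) != "TopZone")
            continue;

        // part.ID carries the g_... value and part.Title whatever was provisioned,
        // so either can be used to pick out a specific part here.
        // Set the audience/targeting property you need, then persist the change:
        manager.SaveChanges(part);
    }
    // Disposal of the manager's underlying SPWeb is omitted for brevity.
}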
Good and bad news...
The approach I was using is in fact valid; however, it only works if your web parts are not placed in a web part zone. If they are placed in a zone, the web part infrastructure automatically assigns the WebPart.ID value (confirmed on MSDN).
In order to hard-code the web part IDs, you need to declare the web parts directly in the page layout, i.e. place them in the page's markup rather than in a zone.
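As a rough illustration only (the ContentEditorWebPart type, the title, and the assumption that the WebPartPages tag prefix is already registered are all placeholders), a statically declared part could look like this:
<%-- Hedged sketch: a web part declared directly in the page layout markup, outside
     any WebPartZone, keeps the ID assigned to it here. --%>
<WebPartPages:ContentEditorWebPart runat="server"
    ID="g_FB777184_F9AB_4747_AA71_1BF0C96E535A"
    Title="My audience-targeted part" />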
Personally, I have opted to leave the web parts in the zones and instead give them unique titles which I can use later in my feature receiver to identify the parts.
After consulting the JsNLog docs and the two posts I found on SO (this and that), there were still open questions about the best way to log the RequestId coming from JsNLog for JavaScript log entries. So far I have come up with this:
(1) In _Layout.cshtml, use the JsNLog tag helper, but place the HttpContext.TraceIdentifier there directly instead of using JSNLog.JavascriptLogging.RequestId(ViewContext.HttpContext), which does the same thing under the hood:
<jl-javascript-logger-definitions request-id="@ViewContext.HttpContext.TraceIdentifier" />
(2) Configure NLog to use the WhenEmpty layout renderer. It will log the HttpContext.TraceIdentifier for all events where the header "JSNLog-RequestId" (submitted by the JsNLog JavaScript) is not set:
<target xsi:type="File" name="allfile" fileName="log-all-${shortdate}.log"
layout="${aspnet-request:header=JSNLog-RequestId:whenEmpty=${aspnet-TraceIdentifier}} ${longdate}| ... other entries ... />
Make sure the NLog.config contains
<extensions>
<!-- Make these renderers available: https://nlog-project.org/config/?tab=layout-renderers&search=package:nlog.web.aspnetcore -->
<add assembly="NLog.Web.AspNetCore"/>
</extensions>
This way we find the same RequestId (aka TraceIdentifier) for server-side and client-side log entries that belong together.
This solution works and feels quite okay (after @Rolf Kristensen's comments were implemented).
Does anyone have a better one?
Even though I'm setting the compatibility level in the request header (967), when I make a call (GeteBayDetails in this case), the response comes back with a higher version than I need and want (979). This applies both to the app I'm currently developing and to the API Test Tool. Is there something I'm missing? Or is the Version tag in the response not related to the compatibility level?
Header:
X-EBAY-API-SITEID:212
X-EBAY-API-COMPATIBILITY-LEVEL:967
X-EBAY-API-CALL-NAME:GeteBayDetails
Body:
<?xml version="1.0" encoding="utf-8"?>
<GeteBayDetailsRequest xmlns="urn:ebay:apis:eBLBaseComponents">
<RequesterCredentials>
<eBayAuthToken>...</eBayAuthToken>
</RequesterCredentials>
</GeteBayDetailsRequest>
And the response:
<?xml version="1.0" encoding="UTF-8"?>
<GeteBayDetailsResponse
xmlns="urn:ebay:apis:eBLBaseComponents">
<Timestamp>2016-09-27T11:21:41.341Z</Timestamp>
<Ack>Failure</Ack>
<Errors>
<ShortMessage>Nieznany błąd.</ShortMessage>
<LongMessage>Nieznany błąd.</LongMessage>
<ErrorCode>17460</ErrorCode>
<SeverityCode>Error</SeverityCode>
<ErrorClassification>RequestError</ErrorClassification>
</Errors>
<Version>979</Version>
<Build>E979_INTL_API_18061441_R1</Build>
</GeteBayDetailsResponse>
PS. As far as I know, the request fails because of the newer API version; it worked like a charm before. That's why I want to stick to 967.
What you are seeing is normal behavior: the response always reports the most recent API schema version that could service your request. For many calls there is no practical difference in execution between the schema you requested and the schema that actually performed the request. This "latest schema version that could service the request" behavior is also how you can tell whether you can safely move your compatibility level up, since support for old versions drops off periodically.
Conversely, when the response reports a schema lower than the latest one in the API release notes, you know that at some point you will have to update your code for whatever has been deprecated or changed, before support ends for the last schema version that can still service your particular request.
This eBay DTS article discusses this "Information in the API Response" behavior, as well as the eBay API schema versioning process.
Also, be sure that XML POST requests specify the API schema version in the request body itself using the Version tag, not just in the HTTP header, as shown in the example call from the GeteBayDetails API documentation:
<?xml version="1.0" encoding="utf-8"?>
<GeteBayDetailsRequest xmlns="urn:ebay:apis:eBLBaseComponents">
<!-- Call-specific Input Fields -->
<DetailName> DetailNameCodeType </DetailName>
<!-- ... more DetailName values allowed here ... -->
<!-- Standard Input Fields -->
<ErrorLanguage> string </ErrorLanguage>
<MessageID> string </MessageID>
<Version> string </Version>
<WarningLevel> WarningLevelCodeType </WarningLevel>
</GeteBayDetailsRequest>
Hope this helps
I only want Nutch to give me a list of the URLs it crawled and the status of each link. I do not need the entire page content or other fluff. Is there a way I can do this? Crawling a seed list of 991 URLs with a depth of 3 takes over 3 hours to crawl and parse, and I'm hoping this will speed things up.
In the nutch-default.xml file there is:
<property>
<name>file.content.limit</name>
<value>65536</value>
<description>The length limit for downloaded content using the file
protocol, in bytes. If this value is nonnegative (>=0), content longer
than it will be truncated; otherwise, no truncation at all. Do not
confuse this setting with the http.content.limit setting.
</description>
</property>
<property>
<name>file.content.ignored</name>
<value>true</value>
<description>If true, no file content will be saved during fetch.
And it is probably what we want to set most of time, since file:// URLs
are meant to be local and we can always use them directly at parsing
and indexing stages. Otherwise file contents will be saved.
!! NO IMPLEMENTED YET !!
</description>
</property>
<property>
<name>http.content.limit</name>
<value>65536</value>
<description>The length limit for downloaded content using the http
protocol, in bytes. If this value is nonnegative (>=0), content longer
than it will be truncated; otherwise, no truncation at all. Do not
confuse this setting with the file.content.limit setting.
</description>
</property>
<property>
<name>ftp.content.limit</name>
<value>65536</value>
<description>The length limit for downloaded content, in bytes.
If this value is nonnegative (>=0), content longer than it will be truncated;
otherwise, no truncation at all.
Caution: classical ftp RFCs never defines partial transfer and, in fact,
some ftp servers out there do not handle client side forced close-down very
well. Our implementation tries its best to handle such situations smoothly.
</description>
</property>
These properties are the ones I think may have something to do with it, but I'm not sure. Can someone give me some help and clarification? Also, I was getting many URLs with a status code of 38, and I cannot find what that status code indicates in this document. Thanks for the help!
Nutch performs parsing after fetching a URL in order to extract all of the outlinks from the fetched page. The outlinks from a URL become the fetchlist for the next round.
If parsing is skipped, no new URLs are discovered and hence there is no further fetching.
One option is to configure the parse plugins so that only the type of content you actually need (in your case, the outlinks) gets parsed; a hedged configuration sketch follows the links below.
One example here - https://wiki.apache.org/nutch/IndexMetatags
This link describes the features of the parser: https://wiki.apache.org/nutch/Features
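For illustration only (not taken from your configuration): you could trim plugin.includes in nutch-site.xml so that only the plugins needed for fetching and outlink extraction stay active. The exact value below is an assumption; adjust it to the plugins your crawl actually relies on.
<property>
  <name>plugin.includes</name>
  <!-- Keep only HTTP fetching, URL filtering/normalizing, HTML parsing for
       outlinks, and OPIC scoring; indexing plugins are left out on purpose. -->
  <value>protocol-http|urlfilter-regex|parse-html|urlnormalizer-(pass|regex|basic)|scoring-opic</value>
</property>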
Now, to get the list of the URLs fetched along with their statuses, you can use the
$bin/nutch readdb crawldb -stats
command for aggregate counts per status, or $bin/nutch readdb crawldb -dump <output_dir> for a per-URL listing.
Regarding the status code of 38: looking at the document you have linked, it seems the status of the URL is
public static final byte STATUS_FETCH_NOTMODIFIED = 0x26
since hex 0x26 is decimal 38 (2 × 16 + 6).
Hope the answer gives some direction :)
Recently I started using Apache Velocity for view templates in the Spring framework, and in order to escape HTML entities I introduced "org.apache.velocity.tools.generic.EscapeTool". However, I then found that the variable named "$application" no longer works: any reference to "$application" renders blank, e.g. "$!application.name".
When I remove the Velocity tools configuration, "$application" is read correctly. Does anyone know whether "$application" is a reserved word when the Velocity EscapeTool is configured, or have I made a mistake in the configuration?
Toolbox config:
<toolbox>
<tool>
<key>esc</key>
<scope>application</scope>
<class>org.apache.velocity.tools.generic.EscapeTool</class>
</tool>
</toolbox>
Config in spring-beans XML:
<bean id="viewResolver"
class="org.springframework.web.servlet.view.velocity.VelocityViewResolver">
<property name="cache" value="true" />
<property name="exposeSpringMacroHelpers" value="true" />
<property name="toolboxConfigLocation" value="/WEB-INF/toolbox.xml" />
</bean>
In template file:
<div class="description">
<h2>Application Name:$!application.name</h2>
</div>
Thanks in advance!
The EscapeTool does not put anything in the context, so it is not overriding your $application variable. To find out what is overriding any variable, you might try
$application.class.name
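For example, dropping something like this into the affected template (purely a diagnostic; remove it afterwards) will show which object is actually answering to $application:
## Diagnostic only: prints the runtime class of whatever $application resolves to.
## A servlet-context implementation class here means the tools layer is supplying it
## rather than your own model attribute.
$!application.class.name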
VelocityTools does automatically return the servletContext when $application is used in a template, but (in the case of Tools 2.0) you can configure whether you want to prefer user-set variables (the default) or the servlet API objects. I don't offhand recall whether that can be configured in Tools 1.4, but I'm sure you can look it up.
In any case, in Tools 2.x it is not reserved, but it does come with a default value. Since it is acting as though it's reserved, I'm guessing you either turned off userOverwrite or else are using Tools 1.4.
I am new to Struts. I am wondering what the input attribute here signifies. After some googling, the only conclusive piece of info was this:
Input: The physical page (or another ActionMapping) to which control should be forwarded when validation errors exist in the form bean.
Is there any other use for the input parameter besides the case of an error occurring?
<action
roles="somerole"
path="some/path"
type="some.java.class"
name="somename"
input="someInput"
scope="request"
validate="false"
parameter="action">
<forward name="success" path="some/path"/>
<forward name="download" path="/another/path"/>
</action>
Yes, although you're correct that it's primarily a forward for failed validation.
The input has a dedicated method to return it: ActionMapping.getInputForward(). This can be used in custom (Java-based) validation to return to the input page.
It can also be used to identify a "landing" page: an action base class or custom request processor might send GET requests to the input forward, and process POSTs normally.
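For instance, here's a minimal sketch of the custom-validation case; the action class, the error handling, and the forward names are made-up placeholders, and mapping.getInputForward() is the only part that matters:
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;
import org.apache.struts.action.ActionMessages;

public class SaveSomethingAction extends Action {

    @Override
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {

        ActionMessages errors = new ActionMessages();
        // ... custom checks that the form bean's validate() can't express ...

        if (!errors.isEmpty()) {
            saveErrors(request, errors);
            // Send the user back to whatever input="someInput" points at.
            return mapping.getInputForward();
        }
        return mapping.findForward("success");
    }
}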