Does WCAG 2.0 allow tables for layout? I can't see anything in the guidelines saying you can't, but this surprises me.
http://www.w3.org/TR/WCAG20-TECHS/H73.html
Although WCAG 2 does not prohibit the use of layout tables, CSS-based layouts are recommended in order to retain the defined semantic meaning of the HTML table elements and to conform to the coding practice of separating presentation from content.
I'd add to #jordan's answer that nested tables are inaccessible.
A layout table must avoid any element or attribute used in data tables, that is:
no caption, thead, tfoot or th elements (only table, tr, td and optionally tbody)
no summary attribute (an empty one is tolerated), and no headers or scope attributes (only rowspan and colspan. And id, obviously)
no data-table oddities like the axis attribute or the col element
It must also be linearizable, that is, it must still make sense when its cells are read left to right, top to bottom.
Relevant WCAG 2.0 Failure Techniques are:
F46: Failure (...) due to using th elements, caption elements, or non-empty summary attributes in layout tables
F49: Failure (...) due to using an HTML layout table that does not make sense when linearized
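For instance, a minimal layout table that follows those rules (the content is a placeholder; role="presentation" is the ARIA way of marking a table as layout-only):

<!-- layout-only: no caption/th/summary/scope, and it linearizes sensibly -->
<table role="presentation">
  <tr>
    <td>Navigation</td>
    <td>Main content</td>
  </tr>
</table>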
Note: if you're using tables only for design constraints, then FYI you can use display: table; (and table-row and table-cell) on every browser except IE6 and IE7.
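For example, a sketch of that technique (the class names are hypothetical):

/* table-style layout without any <table> markup */
.layout      { display: table; width: 100%; }
.layout-row  { display: table-row; }
.layout-cell { display: table-cell; vertical-align: top; }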
What I'm trying to achieve here is to load some fields from sub-entities.
For instance, let's suppose I want to load some features for the product list. In XML it's pretty clear:
<row-actions>
    <entity-find-one entity-name="mantle.product.feature.ProductFeature" value-field="brandList">
        <field-map field-name="productFeatureId" from="featureList.productFeatureId"/>
        <field-map field-name="productFeatureTypeEnumId" value="PftBrand"/>
    </entity-find-one>
</row-actions>
Is there a way to do something similar in Groovy, without iterating through the whole product list and adding the desired fields manually?
Also, can somebody give me a short example of the concrete use of sqlFind (http://www.moqui.org/javadoc/org/moqui/entity/EntityFacade.html)?
I tried to solve the issue I'm asking about using a join query, but I couldn't figure out what the SQL query is supposed to look like.
a. The 'entity-find-one' element queries on the primary key and returns a single map. You need to use the 'entity-find' element.
b. Yes, you can always drop down to Groovy using the script tag, e.g. just use ec.entity.find("mantle.product.feature.ProductFeature") or whatever you need in your Groovy script.
c. In Moqui, joined tables are handled by the 'view-entity' element; you can predefine your own (place it in your 'entities' folder) or use the many existing ones provided in the framework. You don't need SQL.
EDIT - Sorry, you can also do it on the fly by using the EntityFind.makeEntityDynamicView() method.
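For illustration, a minimal Groovy sketch of point (b), reusing the entity and field names from the question (untested, so treat the condition as an assumption about your data model):

// find all brand features without hand-iterating the product list
def brandList = ec.entity.find("mantle.product.feature.ProductFeature")
        .condition("productFeatureTypeEnumId", "PftBrand")
        .list()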
Hope that helps.
I have a Solr index generated from a catalog of PDF files and corresponding metadata fields pertaining to the PDF files themselves. Still, I would like to give my users an option to exclude from the query any text indexed from within a PDF, so that the results are based on the metadata fields alone and not biased by the vast text within the PDF files.
I have thought of maybe having two indexes (cores) - one with the indexed pdf files and one without.
Is there another way?
Sounds like you are doing a general search against a default field, which means you have a lot of copyField instructions (or just one copyField * -> text) that include the PDF content field.
You can create a second destination and copyField everything but the PDF content field into that as well. This way, users can search against one combined field or the other.
However, remember that this parses all content according to the analysis chain of the destination field. So eDisMax with a list of source fields may be a better approach there. And remember, you can use several request handlers (like 'select') and define different default parameters there. That usually makes the client code a bit easier.
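To make the copyField idea concrete, a minimal schema sketch (all field names here are hypothetical; substitute your own):

<field name="text"      type="text_general" indexed="true" stored="false" multiValued="true"/>
<field name="text_meta" type="text_general" indexed="true" stored="false" multiValued="true"/>

<!-- everything, including PDF body text, goes into the full catch-all -->
<copyField source="*" dest="text"/>
<!-- only the metadata fields go into the second destination -->
<copyField source="title"    dest="text_meta"/>
<copyField source="author"   dest="text_meta"/>
<copyField source="keywords" dest="text_meta"/>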
You do not need two separate indexes. You can use the edismax parser and specify the qf parameter at query time; that determines which fields are searched.
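For example, two requests that differ only in qf (the field names are hypothetical, matching the next answer):

/select?defType=edismax&q=annual+report&qf=pdfmeta
/select?defType=edismax&q=annual+report&qf=pdfmeta+pdftext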
You can look at field aliases.
If you have two index fields
pdfmeta
pdftext
Then you can create two field aliases
quicksearch : pdfmeta
fullsearch : pdfmeta, pdftext
One advantage of using a field alias over qf is that if your users have bookmarks like q=quicksearch:value, you can change the definition of quicksearch without affecting the users' bookmarks.
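With eDisMax, those aliases can be declared as request parameters (or as defaults in the request handler), for example:

defType=edismax
f.quicksearch.qf=pdfmeta
f.fullsearch.qf=pdfmeta pdftext

A user query can then address the alias directly, e.g. q=quicksearch:budget.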
I feel we can improve the search results from our help site (I tested a few terms and am not seeing relevant results on the first page) and I am exploring our options.
We use Apache Solr Search and after reading around it seems like we can improve the results by tweaking the field bias. Here is the list of the fields available. Some of the fields are self-explanatory, but I do not know what the others mean, e.g. Path alias, tm_vid_2_names, etc.
The full, rendered content (e.g. the rendered node body)
Title or label
Path alias
Body text inside links (A tags)
Body text inside H1 tags
Body text inside H2 or H3 tags
Body text inside H4, H5, or H6 tags
Body text in inline tags like EM or STRONG
All taxonomy term names
tm_ds_search_result
tm_vid_11_names
tm_vid_12_names
tm_vid_16_names
Taxonomy term names only from the Tags vocabulary
tm_vid_21_names
tm_vid_26_names
tm_vid_2_names
tm_vid_3_names
tm_vid_4_names
tm_vid_5_names
tm_vid_6_names
tm_vid_9_names
Extra rendered content or keywords
Author name
Author name (Formatted)
The rendered comments associated with a node
Thank you very much for your help.
It's impossible to say what the meaning behind all those fields is without knowing your domain and what is actually relevant information. I'd start by looking at how people are actually using your search and what they're searching for - and then start tweaking how much to boost each field to get more relevant results than what you're seeing now.
If you're using the dismax or edismax query handlers (which you probably are), you can tweak the weights and boosts of each field by applying weights in the list of fields to query: qf=field^10 field_2^5 field_3. This would search all three fields, but give more weight to hits in the first field than the second and third.
In your case you'd probably want to give more boost to anything in the title, h1, h2, h3, etc. fields, as they're probably better descriptors of the content, as well as the taxonomy fields. The body field shouldn't be considered very important (so no boost is probably a good start), except to make sure that you're finding the document if it's a rarely used term.
You can append debugQuery=true to your query to see exactly how the results are scored and why a certain document ranked above another in the search results.
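As a hedged starting point, the request parameters might look like this (the field names and weights here are guesses; substitute the machine names from your field bias list, such as tm_vid_2_names):

defType=edismax
qf=label^10 tm_vid_2_names^5 content
debugQuery=true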
It's impossible for anyone without specific knowledge of your data and search patterns to say anything exact about which fields to include and their weights.
I would like to write / read a text file using the FileHelpers library.
However, I'm unsure how to proceed when the file has several headers, footers and detail records.
The structure of my file is as follows:
FileHeader
AHeader
ADetail
ADetail
ADetail
AFooter
BHeader
BDetail
BDetail
BFooter
CHeader
CDetail
CDetail
CDetail
CDetail
CFooter
FileFooter
Can anyone indicate a possible way to solve this?
You can use the MultiRecordEngine to read or write a file with many different layouts.
http://www.filehelpers.net/example/Advanced/MultiRecordEngine/
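A minimal sketch of that engine (the record layouts and the first-character type codes are assumptions; adjust them to your real format):

using System;
using FileHelpers;

[DelimitedRecord(";")]
public class Header { public string Code; public string Name; }

[DelimitedRecord(";")]
public class Detail { public string Code; public decimal Amount; }

[DelimitedRecord(";")]
public class Footer { public string Code; public int Count; }

public class Program
{
    public static void Main()
    {
        var engine = new MultiRecordEngine(typeof(Header), typeof(Detail), typeof(Footer));
        engine.RecordSelector = CustomSelector; // called once per line to pick the type
        object[] records = engine.ReadFile("input.txt");
    }

    // Assumes the first character of each line identifies the record type.
    private static Type CustomSelector(MultiRecordEngine engine, string recordLine)
    {
        if (recordLine.StartsWith("H")) return typeof(Header);
        if (recordLine.StartsWith("F")) return typeof(Footer);
        return typeof(Detail);
    }
}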
Out of the box, using FileHelpers for a format that complex would be difficult.
FileHelpers provides two methods of handling multiple record types: the master/detail engine and the multi-record engine.
Unfortunately it is likely that you need both for your format. It would be hard to combine them without some further coding.
To be clear:
the MasterDetailEngine caters for the header/footer situation, but it currently supports only one detail type and only one level of nesting.
the MultiRecordEngine allows multiple record types. However, it treats each row as an unrelated record, so the hierarchy (that is, which detail record belongs to which master record) would be hard to reconstruct.
I'm searching against a table of news articles. The two relevant columns are ArticleTitle and ArticleText. When I want to search an article for a particular term, I started out with
column LIKE '%term%'.
However, that gave me a lot of articles with the term inside anchor links, for example <a href="example.com/term">, which could return an irrelevant article.
So then I switched to
column LIKE '% term %'.
The problem with this query is that it didn't find articles whose title or text began/ended with the term. It also didn't match things like term- or term's, which I do want.
It seems like the query I want should be able to do something like this
'%[^a-z]term[^a-z]%'
This should exclude terms within anchor links but match everything else. I think this query still excludes strings that begin/end with the term, though. Is there a better solution? Does SQL Server's FULL TEXT INDEXING solve this problem?
Additionally, would it be a good idea to store ArticleTitle and ArticleText as HTML-free columns? Then I could use '%term%' without matching anchor links. These would be two extra columns though, because eventually I will need the original HTML for formatting purposes.
Thanks.
SQL Server's LIKE allows you to define regex-like patterns like the one you described.
A better option is to use fulltext search:
WHERE CONTAINS(ArticleTitle, 'term')
exploits the index properly (the LIKE '%term%' query is slow) and provides other benefits in the search algorithm.
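A hedged end-to-end sketch (the table, key-index and catalog names are assumptions):

-- one-time setup: full-text catalog and index over both columns
CREATE FULLTEXT CATALOG ArticleCatalog;
CREATE FULLTEXT INDEX ON dbo.Articles (ArticleTitle, ArticleText)
    KEY INDEX PK_Articles ON ArticleCatalog;

-- CONTAINS matches on word boundaries, so forms like term's are
-- handled by the word breaker instead of wildcard scans
SELECT ArticleID, ArticleTitle
FROM dbo.Articles
WHERE CONTAINS((ArticleTitle, ArticleText), 'term');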
Additionally, you might benefit from storing a plaintext version of the article alongside the HTML version and running your search queries on it.
SQL is not designed to interpret HTML strings, so you'd only be postponing the problem until a more difficult case arrives (for example, a comment node that contains your search terms as part of a plain sentence).
You can still use FULL TEXT as a prefilter and then run an HTML analysis in the application layer to filter your result set further.
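For that application-layer step, a naive sketch of building the plaintext column (a real HTML parser such as HtmlAgilityPack is safer with malformed markup):

using System.Text.RegularExpressions;

// crude tag stripper: replaces every <...> run with a space
static string StripHtml(string html) =>
    Regex.Replace(html, "<[^>]+>", " ");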