In Eloqua, can you send out an email to a contact list but version the "hero" image headline for each segment using dynamic content blocks?
And then can you do the reverse, have the main image remain the same, and dynamically populate products below that they've purchased in the past?
For scenario 1, yes, that is possible out of the box.
Scenario 2 is more complicated, however, and would generally require a third-party tool to provide this type of dynamic code generation from a lookup table (in this case a line-item inventory of purchases). Because a contact could have zero or more products (commonly stored as individual records in a CDO), you would generally need to aggregate or count the related records, then generate your HTML table and formatting around those record values, and be contextually aware of whether you are on the first or last record (to open and close the table). Dynamic content has no mathematical functions and cannot count those related records. That capability is usually provided by a B2C system like SFMC using AMPscript, or dynamically generated through custom code and sent through a transactional SMTP service. You could stack multiple dynamic content blocks on top of each other, but your biggest limitation becomes the field merge, which only lets you select a record based on earliest/latest creation date or last modified date. That is not suitable if you have more than 2 records. A third-party service that provides a cloud content module for your email is your best bet.
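To make the aggregation problem concrete, here is a rough sketch of what the custom-code route might do to build per-contact product HTML before handing it to a transactional sender. This is a minimal illustration, not Eloqua functionality: the Purchases table and its columns are hypothetical, and STRING_AGG assumes SQL Server 2017+ (other databases offer equivalents such as GROUP_CONCAT).

-- Build one HTML <table> per contact from zero or more purchase rows.
-- The outer concatenation opens and closes the table (the "first/last
-- record" awareness), and the aggregate collapses a variable number of
-- rows (the counting that dynamic content cannot express).
SELECT
    p.ContactId,
    '<table>'
        + STRING_AGG('<tr><td>' + p.ProductName + '</td></tr>', '')
        + '</table>' AS ProductTableHtml
FROM Purchases AS p
GROUP BY p.ContactId;
-- Contacts with no purchases simply produce no row here, so the sending
-- code can fall back to a default content block for them.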
I have two dynamic product groups.

First: Test Product with variants
Condition: Product is equal to Variant product
Result: a total of 7, as I expect.

Second: Active Products
Condition: Active yes
Here we already see that the stream_ids are set to just 5 products.

Now we get a total of 5 instead of the 15 products we expected. Why is it inconsistent, and how can I modify my request so that it also considers the variants?
You shouldn't rely on the stream_ids column as an indicator of which products are shown in a dynamic product group at any given moment, because several other factors determine whether a product is shown to a user in a dynamic product group.
The filters you define for the group resolve to an SQL query, which in simplified terms would yield something like WHERE active = 1 AND id IN ('...', '...'). So the stream_ids column isn't used to select the contents of a group; instead, the entire query, including all filters, is executed during the storefront request. The result of that query is what you see in the preview of the dynamic product group.
Why doesn't it correlate completely with the content of stream_ids?
Shopware features inheritance of fields: if a field of a variant hasn't been assigned a value, it may inherit that value from its parent. This may not be reflected in the contents of stream_ids. In fact, the children/variants may even inherit the contents of stream_ids itself.
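As a rough sketch of how that inheritance interacts with the filter (heavily simplified; the real query Shopware builds is considerably more involved, and the column layout here is only illustrative):

-- A variant whose own active column is NULL falls back to its parent's
-- value, so it can satisfy "Active yes" even though nothing about the
-- variant appears in stream_ids.
SELECT p.id
FROM product AS p
LEFT JOIN product AS parent ON p.parent_id = parent.id
WHERE COALESCE(p.active, parent.active) = 1;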
Then there's the fact that the contents of the product group may vary depending on the current sales channel. That may be because the sales channel features a different language, so the content of a translatable field used in a filter may differ. Also, if you use price filters, there is the possibility of products with multiple prices, which might only apply when certain conditions, defined in the Rule Builder, are met.
In short, don't count on stream_ids: it can't reflect all these variables and is only used internally in some capacity, for invalidating caches and such. Instead, use the preview to judge what the average user will find when they see a product group. You can also choose which sales channel the preview should apply to, for exactly this reason: contents may differ depending on the sales channel.
MarkLogic 9.0.8.2
We have around 20M records in MarkLogic.
For one business requirement, we need to generate additional data for each XML document, which users will then search.
Since we can't change the original documents, we need input on the best way to manage this additional data. These are the options we have thought of:
1. Create a separate collection and store the additional data in a separate XML document with the same unique number as the original. When a user searches, we query this collection, then retrieve the original documents and send the response back.
2. Store the additional data in the original document's properties.
We also need to create an element range index to make sure this works when end users search with range operators.
<abc>
  <xyz>
    <quan>qty1</quan>
    <value1>1.01325E+05</value1>
    <unit>Pa</unit>
  </xyz>
  <xyz>
    <quan>qty2</quan>
    <value1>9.73E+02</value1>
    <value2>1.373E+03</value2>
    <unit>K</unit>
  </xyz>
  <xyz>
    <quan>qty3</quan>
    <value1>1.8E+03</value1>
    <unit>s</unit>
  </xyz>
  <xyz>
    <quan>qty4</quan>
    <value1>3.6E+03</value1>
    <unit>s</unit>
  </xyz>
</abc>
We need to search on the data in the value1 elements. Users will then search for something like:

qty1 >= minvalue AND qty1 <= maxvalue
qty2 >= minvalue AND qty2 <= maxvalue
qty3 >= minvalue AND qty3 <= maxvalue

So when a user searches on qty1, the query should only match the value from the xyz element whose quan is qty1, and so on.
So we would like to know:
1. What is the best approach to store data like this?
2. What kind of index should I create to implement this?
I would recommend wrapping the original data in an envelope, which allows adding extra data in a header. It also lets you create a canonical view of the relevant pieces of the data: either store that view as the instance and the original as an 'attachment' (a sub-property, not an attached binary), or keep the instance as-is and put canonical values for indexing in the header.
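For example, the envelope for the sample document above might look something like this (a minimal sketch; the qty* element names and the flattened values are hypothetical, derived from that sample):

<envelope>
  <header>
    <qty1 unit="Pa">101325</qty1>
    <qty2-min unit="K">973</qty2-min>
    <qty2-max unit="K">1373</qty2-max>
    <qty3 unit="s">1800</qty3>
    <qty4 unit="s">3600</qty4>
  </header>
  <instance>
    <abc>
      ... original document, unchanged ...
    </abc>
  </instance>
</envelope>

With canonical elements like these, a plain element range index (scalar type double, say) on qty1, qty3, and so on answers the range searches directly, avoiding the ambiguity of a value1 element whose meaning depends on its sibling quan.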
There is a lengthy blog article on the topic, which discusses the pros and cons in detail: https://www.marklogic.com/blog/envelope-design-pattern/
HTH!
Grtjn's answer is the recommended solution, as it is more performant to keep all the information inside the document itself rather than having to query across both the document and its properties; however, it does require changing the document.
Options 1 and 2 could both work.
Properties documents already exist, so option 2 doesn't add fragments, but the properties must conform to the properties schema.
Creating a sidecar document (option 1) provides more flexibility, but because you are creating new documents, it will increase the number of fragments.
I have a few tables, as shown below.
Polls

PollId  Question  Option
1       What      1
2       Why       4

Updates

UpdateId  Text
1         Sleep
2         Play
Polls and Updates are just two sample tables (in reality there are more, such as photos, videos, links, etc.). When a user visits his home page (like the Facebook news feed), he must be shown data relevant to him (no such relevance data is included in this example). That is, I want to select data from all the tables with as few query executions as possible, so I can present a mixture of items: polls, photos, videos, and so on.
Currently, I'm fetching only the ids and a type indicator (i.e. which table) from all of the tables, and I gather the remaining data while iterating through this result set (i.e. issuing another SQL query from C# for each row).
Is there a way to query the data from all the tables at once (OUTER JOIN? UNION?)
Or, simply:
How can I select different types of entities at once in a single SQL query?
You could write your query so that you have one long select list for everything you want, and it all comes back in one result set, but I suspect that wouldn't work too well, because you might have varying numbers of each type of item per user.
If you really must have it all in one hit, then you can issue multiple queries in one go and get multiple result sets back. To handle this you can use an ADO.NET DataSet. See this SO example (but not the accepted answer; see Vikram Dibyal's answer, as that gives a very basic overview of what I think you're asking for).
I won't copy and paste the stuff from the linked thread, just head over and take a look.
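As a sketch of that approach with the tables above (the per-user relevance filtering is omitted, since it isn't part of this example), the batch sent to SQL Server could simply be:

-- One command, one round trip; each SELECT comes back as its own result set.
SELECT PollId, Question, [Option] FROM Polls;
SELECT UpdateId, [Text] FROM Updates;

On the C# side, a single SqlDataAdapter.Fill(dataSet) call over that command populates dataSet.Tables[0] with the polls and dataSet.Tables[1] with the updates, so one database round trip returns both entity types.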
I have what I consider a real need to create a query with several hundred columns.
We are working on a mailing for our client. In this mailing, they are listing out several locations where their customers can go to get information. As our designers create the template for this mailing, they are setting up "Slots" for each address. The number of slots on the mailing varies from one mailing to the other, from 6 to possibly 50.
My need for the query is to set up the merge of data into the mailing. I need to provide a query where each mailing is 1 record containing all the information needed for that mailing. I am dynamically creating the SQL statement with the max number of slots on that mailing. With up to 50 slots, my query needs to look like this:
MailingID,
LogoLocation,
APNCode,
TFN,
CopyVersion,
Slot1_Name,
Slot1_Address,
Slot1_City,
Slot1_State,
Slot1_DateTime,
...
Slot50_Name,
Slot50_Address,
Slot50_City,
Slot50_State,
Slot50_DateTime
My first attempt was to create a table with all these fields, but I got this error:
The table has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes. INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit.
They only want the data in a CSV file, so I don't need to create a temp table for it.
My problem is that I'm trying to create a standard process and with the number of fields varying like that, I want to set this up in a way that we won't blow up the system every time we try and run it.
I've looked at a few pages and found details on the size limitations of SQL Server and several comments saying a table like this shows a bad database design.
http://msdn.microsoft.com/en-us/library/ms143432(v=sql.105).aspx
http://social.msdn.microsoft.com/Forums/en-US/fec1efbb-94ff-4fe9-8d69-12e95c48587d/its-maximum-row-size-exceeds-the-allowed-maximum-of-8060-bytes-insert-or-update-to-this-table-will?forum=transactsql
Work around SQL Server maximum columns limit 1024 and 8kb record size
I'm hoping that someone out there has some experience doing this and can share some insights on how to make this efficient. Is there another way to accomplish this that I don't know about?
UPDATE:
Thanks for all the quick replies.
More detail on my scenario: you get a flyer in the mail, and when you turn the flyer over, it lists 50 locations in your county where you could go take a class or attend a meeting. All the details for that flyer need to be in 1 record so they can map the fields onto the one page. If that county has 50 address/date/time combinations, they need them included in the 1 record so they can properly slot the flyer. Think of a giant mail merge where there might only be 100 counties (100 flyers), but each flyer has tons of information.
When the data is actually stored in the database, I'm storing an id for the specific flyer (MailingID) and each address/date/time combo is its own record. It's just the file they need to merge the details onto the creative piece that has to be denormalized like this.
I haven't been able to find any details on limitations on views. Does a View have the same limitations as a table? Would it work to create a view for them that they can download when they need the data?
"All the details for that flyer needs to be in 1 record so they can map the fields on the one page"

That is a questionable assumption. Why can't the data be stored in 50 rows in a 2nd table?
Anyway, if you insist on storing everything in one row, you should probably use XML or JSON. That makes all these problems go away. SQL Server has great support for XML; you can even generate XML on the fly. So you could properly store the 50 items in a 2nd table and only combine them into one XML value for query purposes.
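A minimal sketch of that idea, assuming the slot data lives in a hypothetical MailingSlots child table keyed by MailingID:

-- One row per mailing, with all of its slots nested in a single XML column.
SELECT
    m.MailingID, m.LogoLocation, m.APNCode, m.TFN, m.CopyVersion,
    (SELECT s.Name, s.Address, s.City, s.State, s.EventDateTime
     FROM MailingSlots AS s
     WHERE s.MailingID = m.MailingID
     ORDER BY s.SlotNumber
     FOR XML PATH('Slot'), ROOT('Slots'), TYPE) AS Slots
FROM Mailings AS m;

Because the xml value is a large-object type, the 8,060-byte in-row limit no longer constrains the number of slots, and the consumer still receives exactly one record per mailing to merge from.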
There are some articles written in several parts;
for example, these three articles from IBM developerWorks:
Distributed data processing with Hadoop, Part 1: Getting started
Distributed data processing with Hadoop, Part 2: Going further
Distributed data processing with Hadoop, Part 3: Application development
I will index those three articles separately. When someone searches for certain keywords, it is possible that Part 3 is at the top of the hit list while Part 1 is 32nd. Therefore, if I list results page by page, Part 1 and Part 3 will be displayed on different pages.
How can I make sure that hits belonging to the same series are displayed together?
I guess in SQL we could use "group by".
I believe what you are asking for is Field Collapsing, which is currently a trunk feature in Solr, and will be incorporated into the next Solr version.
If you want to roll your own, one possible way to do it is:
Add a "series id" field to each document that is a member of a series. You will have to ensure that this gets incremented for every new series.
Make an initial query to Lucene, and get a hit list.
For each hit, check to see if it has a series id; if it does, make another query by the series id to retrieve all the members of the series.
An alternative is to store the ids of all the series members in a field inside each member's document.
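For reference, once you are on a Solr release where that collapsing feature has landed (it eventually shipped as Result Grouping), the series id field from the roll-your-own approach plugs straight into it: a request with group=true&group.field=series_id (plus group.limit to control how many members of each series are returned) collapses the hit list so that all parts of an article come back together as a single group.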