Enterprise Architect -> How to get the edge of the end node using a SQL query on the .eap file (.mdb) - sql

I have to draw some EA diagrams using only the .eap file, without an installed EA on the server, so I am opening it as an MDB file via ODBC.
I know there is the attribute t_diagramlinks.Geometry (with EDGE={1,2,3,4}) and the attributes t_connector.Start_Edge and t_connector.End_Edge.
The table t_diagramlinks with the attribute Geometry is diagram-dependent.
The table t_connector with the attributes Start_Edge and End_Edge is not diagram-dependent, so there can be connections that haven't been drawn on any diagram.
I know that the SX, SY, EX, EY values of t_diagramlinks are coordinates relative to the origin of each node that is drawn on a diagram.
Problem: EX/EY are sometimes zero, and then the line is not drawn to the edge of the end node. I guess it has something to do with the mouse release position.
"My Interpretation" below is what my renderer produces based on my assumptions.
"EA Interpretation" is what EA is actually rendering and which I would like to see in my renderer as well.
Questions
I am using the CSV value EDGE in t_diagramlinks.Geometry - but where
do I find this for the end node?
For which purpose are the attributes Start_Edge and End_Edge in the
table t_connector when it is not diagram-dependent?

I am using the CSV value EDGE in t_diagramlinks.Geometry - but where
do I find this for the end node?
You need to use Euclidean geometry. SX,SY/EX,EY are relative shifts from the shortest center connection between start and end element.
For which purpose are the attributes Start_Edge and End_Edge in the
table "t_connector" when it is not diagram-dependent?
They are used for qualified properties.
Edit: To elaborate a bit more on your basic issue: t_diagramlinks.path holds the bending points for the connector (if any are specified). So in order to find the point where a connector actually joins an element, you have to find the bend nearest to that element. Between this bend and the middle of the element you then have a natural attachment point. Relative to that, the SX-Y (/EX-Y) offsets are added to get the manually shifted attachment point that is actually rendered.
The above goes with a grain of salt. I never verified the nitty-gritty but went by gut feeling from looking at the numbers. I might look into it in detail to update my Inside book, but I can't promise.
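As an illustration only (not EA's own code), here is a minimal JavaScript sketch of parsing the two columns, assuming the formats that also show up further below: Geometry as a CSV string like SX=-67;SY=-43;EX=-12;EY=-40;EDGE=3;... and the path as x:y; pairs:
// Hedged sketch: parse t_diagramlinks.Geometry ("SX=..;SY=..;EX=..;EY=..;EDGE=..;...")
function parseGeometry(geometry) {
    var result = {};
    geometry.split(';').forEach(function (token) {
        var pair = token.split('=');
        if (pair.length === 2 && ['SX', 'SY', 'EX', 'EY', 'EDGE'].indexOf(pair[0]) > -1) {
            result[pair[0]] = parseInt(pair[1], 10);
        }
    });
    return result; // e.g. { SX: -67, SY: -43, EX: -12, EY: -40, EDGE: 3 }
}
// Hedged sketch: parse the path column ("x:y;x:y;..."); an empty string means no bends
function parsePath(path) {
    if (!path) { return []; }
    return path.split(';')
        .filter(function (token) { return token.indexOf(':') > -1; })
        .map(function (token) {
            var xy = token.split(':');
            return { x: Number(xy[0]), y: Number(xy[1]) };
        });
}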
2nd Edit: Now that I know that "My Interpretation" is what your renderer produces based on your assumptions, here's the (most likely; see above) story. In order to render a connector EA will use the following information:
the frame coordinates of the two connected elements
the middle points of the elements, calculated from those coordinates
the path property of the connector (if not empty)
the nearest bending point to the relevant elements (if not empty)
the shift factors SX-Y and EX-Y for the connector
Starting from the start element's middle point, you draw a virtual line to either the nearest bending point or the middle of the end element (unless: see below). That way you can calculate the virtual attachment point on the element's rectangular frame (even use cases have a rectangular frame). Now you shift that point by SX-Y, which will (should?) always travel along the edge of the element frame. Now you have the virtual attachment point for the start element.
On the other side (the end element; my "unless" from above) you do a similar thing to calculate the virtual attachment point for the end. What I don't know is the real order in which EA does this (I have no code insight). So if you have manual offsets on both sides, the calculation will give different results depending on the order in which the virtual connection to the other side is drawn (so: is the shift on the other side respected or not?). Basically I think you can neglect that for 99.9% of all cases, and the rest is just irrelevant noise.
So now that you know the virtual end points, you either connect them directly or, if a path is given, you connect them via the bending points.
Again: all with a grain of salt. It's just observation from the outside but likely not too far away. There's also the fact that you have different line styles with rounded edges (not taken into account here) and bezier lines (even more dragon land).
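To make the above concrete, here is a minimal hedged sketch of the attachment-point idea as I read it (my own illustration under those assumptions, not EA's actual algorithm): intersect the segment from the element's centre towards the target (nearest bend or the other element's centre) with the element's rectangular frame, then apply the SX/SY (or EX/EY) shift on top.
// Hedged sketch: rect = {x, y, width, height} is the element frame,
// target = nearest bend point or the centre of the other element,
// shiftX/shiftY = SX/SY (or EX/EY) from t_diagramlinks.Geometry.
function virtualAttachmentPoint(rect, target, shiftX, shiftY) {
    var cx = rect.x + rect.width / 2;
    var cy = rect.y + rect.height / 2;
    var dx = target.x - cx;
    var dy = target.y - cy;
    if (dx === 0 && dy === 0) {
        return { x: cx + shiftX, y: cy + shiftY }; // degenerate case: target equals the centre
    }
    // Scale the direction vector until the ray from the centre hits the frame.
    var tX = dx !== 0 ? (rect.width / 2) / Math.abs(dx) : Infinity;
    var tY = dy !== 0 ? (rect.height / 2) / Math.abs(dy) : Infinity;
    var t = Math.min(tX, tY);
    // Natural attachment point on the frame, plus the manual offset
    // (simplified: the offset is applied directly instead of strictly along the edge).
    return { x: cx + dx * t + shiftX, y: cy + dy * t + shiftY };
}
Whether the sign conventions of SX/SY match the diagram's coordinate system would still have to be checked against real data.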

Thank you very much. I chose to calculate the end edge mathematically:
My problem was that I wasn't able to detect the end edge of the target end node of a link relation. Furthermore, there are 8 different link types that determine the layout of the links. So, as Thomas mentioned, I had to detect the last point before the endpoint. If there is a path, the last node of the path is the point before the endpoint. If there is no path, the start node's starting point is the last point before the endpoint. But if a path is defined and the link mode has been set to 1, I must not process the connection path, because the Conn_Path property then contains a customized line, but after customizing, the user selected a direct link again (the path is not deleted).
The math behind it uses the linear function y = m*x + b, and the end node is described by 4 straight lines corresponding to its edges.
The complete algorithm uses the following approach:
1.) Determine the straight line between the start and end node (there are 2 special cases if the line is completely horizontal or vertical, i.e. parallel to the coordinate axes)
2.) Create a rectangle which consists of four straight lines (2 vertical / 2 horizontal lines)
3.) Determine the intersections of the first straight line with the rectangle's lines
4.) Exclude intersection points that do not lie on the rectangle
5.) Determine the point on the rectangle with the shortest distance => this is the end edge point we are looking for
The JavaScript code I used to do the routing is the following:
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Create a custom link class for routing the connectors that were drawn by hand
// and are defined via special attributes in the EAP file.
// Calls the superclass constructor of go.Link.
function MultiNodePathLink() {
go.Link.call(this);
}
go.Diagram.inherit(MultiNodePathLink, go.Link); // inherit from go.Link
// ignores this.routing, this.adjusting, this.corner, this.smoothness, this.curviness
/** #override */
MultiNodePathLink.prototype.computePoints = function () {
// "this" is an inherited go.Link here
var startNode = this.fromNode;
var startNodeX = startNode.location.x; // X coordinate of the start node
var startNodeY = startNode.location.y; // Y coordinate of the start node
var endNode = this.toNode;
var endNodeX = endNode.location.x; // X coordinate of the end node
var endNodeY = endNode.location.y; // Y coordinate of the end node
var startNodeData = startNode.data; // the start node's data
var endNodeData = endNode.data; // the end node's data
// the link data
var linkProperties = this.data;
//** The Style field in [t_diagramlink] determines how the connector is rendered **/
// http://www.capri-soft.de/blog/?p=2904
/*
* 1 = Direct Mode=1
* 2 = Auto Routing Mode=2
* 3 = Custom Line Mode=3
* 4 = Tree Vertical Mode=3;TREE=V
* 5 = Tree Horizontal Mode=3;TREE=H
* 6 = Lateral Vertical Mode=3;TREE=LV
* 7 = Lateral Horizontal Mode=3;TREE=LH
* 8 = Orthogonal Square Mode=3;TREE=OS
* 9 = Orthogonal Rounded Mode=3;TREE=OR
*/
var styleStringArray = linkProperties.style.split(";");
var mode = -1;
var tree = '';
for (var i = 0; i < styleStringArray.length; i++) {
if (styleStringArray[i].trim().indexOf('Mode=') > -1) {
mode = styleStringArray[i].replace('Mode=', '');
}
if (styleStringArray[i].trim().indexOf('TREE=') > -1) {
tree = styleStringArray[i].replace('TREE=', '');
}
}
// The free-text column "Geometry" in the table t_diagramlinks stores, as a CSV string,
// how the link was finally drawn on the diagram
var geometryString = linkProperties.geometry.split(";");
// SX and SY are relative to the centre of the start object
var sx = geometryString[0].replace("SX=", "");
var sy = geometryString[1].replace("SY=", "");
// EX and EY are relative to the centre of the end object
var ex = geometryString[2].replace("EX=", "");
var ey = geometryString[3].replace("EY=", "");
// SX=-67;SY=-43;EX=-12;EY=-40;EDGE=3;$LLB=;
// LLT=;LMT=;LMB=CX=30:CY=13:OX=11:OY=-2:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=1:DIR=0:ROT=0;
// LRT=;LRB=;IRHS=;ILHS=;
// EDGE ranges in value from 1-4, with 1=Top, 2=Right, 3=Bottom, 4=Left (Outgoing Point of the Start Object)
var edge = geometryString[4].replace("EDGE=", "");
// The custom routing starts here
this.clearPoints();
// leftover debugging hook for one specific link
if (linkProperties.start_object_name == 'System Verification Test Reports' && linkProperties.end_object_name == 'System test specifications') {
var test = 'irrsinn';
}
// Here the waypoints for the customized link routing are defined.
// If the link goes up or down, the Y coordinate of the start node is used (because of orthogonal routing)
var startConnX = null;
var startConnY = null;
if (edge == 1) { // top edge
startConnX = Math.abs(startNodeX) + Math.abs((startNode.actualBounds.width / 2) + new Number(sx));
startConnY = Math.abs(startNodeY);
}
else if (edge == 3) { // bottom edge
startConnX = Math.abs(startNodeX) + Math.abs((startNode.actualBounds.width / 2) + new Number(sx));
startConnY = Math.abs(startNodeY) + new Number(startNode.actualBounds.height);
}
else if (edge == 2) { // right edge
startConnX = Math.abs(startNodeX) + startNode.actualBounds.width;
startConnY = Math.abs(startNodeY) + Math.abs((startNode.actualBounds.height / 2) - new Number(sy));
}
else if (edge == 4) { // left edge
startConnX = new Number(Math.abs(startNodeX));
startConnY = Math.round(startNodeY) + Math.round((startNode.actualBounds.height / 2) - new Number(sy));
}
else {
alert('The edge could not be detected! Is the Geometry string in the EAP file correct?');
}
this.addPoint(new go.Point(Math.round(startConnX), Math.round(startConnY)));
// Check: is there a last path point?
var lastPathPunkt=false;
var lastPathPunktX, lastPathPunktY;
if (mode != 1)
{
// Routing via the intermediate waypoints
if (typeof linkProperties.conn_path !== "undefined" && linkProperties.conn_path !== "") {
var splittedArray = linkProperties.conn_path.split(";");
if (splittedArray.length > 1) {
// At least one value is present here, because even the first one is terminated with a semicolon in EA's path
for (var i = 0; i < splittedArray.length - 1; i++) {
var einMittelPunkt = splittedArray[i];
var mittelPunktArray = einMittelPunkt.split(":");
this.addPoint(new go.Point(Math.abs(new Number(mittelPunktArray[0])), Math.abs(new Number(mittelPunktArray[1]))))
lastPathPunktX = Math.abs(new Number(mittelPunktArray[0]));
lastPathPunktY = Math.abs(new Number(mittelPunktArray[1]));
lastPathPunkt = true;
}
}
}
}
// If there was no path, the last point must be identical to the start node's connection point
if (lastPathPunkt == false) {
lastPathPunktX = Math.abs(Math.round(startConnX));
lastPathPunktY = Math.abs(Math.round(startConnY));
}
// End routing
// The end point in EA, in document coordinates
var endConnX = Math.abs(endNodeX) + Math.abs((endNode.actualBounds.width / 2) + new Number(ex));
var endConnY = Math.abs(endNodeY) + Math.abs((endNode.actualBounds.height / 2) - new Number(ey));
// Special cases for horizontal and vertical lines:
if (endConnX == lastPathPunktX) {
// This is a vertical straight line (e.g. from top to bottom)
this.addPoint(new go.Point(Math.round(lastPathPunktX), Math.round(lastPathPunktY)));
this.addPoint(new go.Point(Math.round(endConnX), Math.round(endConnY)));
} else if (endConnY == lastPathPunktY) {
// This is a horizontal straight line (e.g. from right to left)
this.addPoint(new go.Point(Math.round(lastPathPunktX), Math.round(lastPathPunktY)));
this.addPoint(new go.Point(Math.round(endConnX), Math.round(endConnY)));
} else {
// It is not an axis-parallel line but a general line that can be described by y = m*x + b
// 1.) Determine the straight line between the start and end point
//      Ye-Ys
// m = -------     b = Ys - m*Xs  or  b = Ye - m*Xe
//      Xe-Xs
var m = (endConnY - lastPathPunktY) / (endConnX - lastPathPunktX);
var b = lastPathPunktY - m * lastPathPunktX
// 2.) Determine the horizontal and vertical lines of the rectangle and their intersection points
// The lines that define the rectangle:
var rY1 = endNodeY;
var rY2 = endNodeY + endNode.actualBounds.height;
var rX1 = endNodeX;
var rX2 = endNodeX + endNode.actualBounds.width;
// (rX1, rY1) -zu-> (rX2, rY2)
// Horizontal lines:
//      y - b
// x = -------
//        m
var lengthToPoint = [];
var sX1 = (rY1 - b) / m; // S1(sX1|rY1)
if (sX1 >= rX1 && sX1 <= rX2) {
// The intersection point sX1 lies on the rectangle
// Distance: d = SQRT((y2-y1)^2 + (x2-x1)^2)
var dS1 = Math.sqrt(Math.pow(rY1 - lastPathPunktY, 2) + Math.pow(sX1 - lastPathPunktX, 2));
lengthToPoint.push({
"distanz": dS1,
"x": sX1,
"y": rY1
});
}
var sX2 = (rY2 - b) / m; // S2(sX2|rY2)
if (sX2 >= rX1 && sX2 <= rX2) {
// The intersection point sX2 lies on the rectangle
// Distance: d = SQRT((y2-y1)^2 + (x2-x1)^2)
var dS2 = Math.sqrt(Math.pow(rY2 - lastPathPunktY, 2) + Math.pow(sX2 - lastPathPunktX, 2));
lengthToPoint.push({
"distanz": dS2,
"x": sX2,
"y": rY2
});
}
// Vertical lines:
//
// y = m*x + b
var sY1 = m * rX1 + b; // S3(rX1|sY1)
if (sY1 >= rY1 && sY1 <= rY2) {
// The intersection point sY1 lies on the rectangle
// Distance: d = SQRT((y2-y1)^2 + (x2-x1)^2)
var dS3 = Math.sqrt(Math.pow(sY1 - lastPathPunktY, 2) + Math.pow(rX1 - lastPathPunktX, 2));
lengthToPoint.push({
"distanz": dS3,
"x": rX1,
"y": sY1
});
}
var sY2 = m * rX2 + b; // S4(rX2|sY2)
if (sY2 >= rY1 && sY2 <= rY2) {
// The intersection point sY2 lies on the rectangle
// Distance: d = SQRT((y2-y1)^2 + (x2-x1)^2)
var dS4 = Math.sqrt(Math.pow(sY2 - lastPathPunktY, 2) + Math.pow(rX2 - lastPathPunktX, 2));
lengthToPoint.push({
"distanz": dS4,
"x": rX2,
"y": sY2
});
}
// Sort all points by distance - the one with the smallest distance is the end edge point we are looking for
lengthToPoint.sort(function (a, b) { return a.distanz - b.distanz });
if (lengthToPoint.length > 0)
{
this.addPoint(new go.Point(Math.round(lengthToPoint[0].x), Math.round(lengthToPoint[0].y)));
}
else
{
this.addPoint(new go.Point(Math.round(lastPathPunktX), Math.round(lastPathPunktY)));
}
}
return true;
};
// end MultiNodePathLink class

Related

Revit: Cannot create dimensions when ViewSection is rotated

I'm trying to set dimensions on the elements in an AssemblyInstance. The code operates with the coordinates from the first element.
AssemblyInstance ass; //is found and is not null
ViewSection vsec = RevitAuxilaries.CreateAssemblyViewSection(uiapp, ass,
AssemblyDetailViewOrientation.ElevationFront, ElementId.InvalidElementId, 25);
// UIApplication, AssemblyInstance, AssemblyDetailViewOrientation, TemplateId, scale // created
BoundingBoxXYZ bbox1 = ass.get_BoundingBox(uiapp.ActiveUIDocument.ActiveView);
XYZ ptmid = (bbox1.Max + bbox1.Min) * 0.5;
Element cropboxelm = RevitAuxilaries.GetViewCropBox(uiapp, vsec); // finds CropBox element, found
BoundingBoxXYZ bcropbox = vsec.CropBox;
XYZ center = new XYZ(ptmid.X, ptmid.Y, 0.5 * (bcropbox.Max.Z + bcropbox.Min.Z));
Line axis = Line.CreateBound(center, center + XYZ.BasisZ);
RevitAuxilaries.RotateElement2(uiapp, cropboxelm, axis, 0.6981); // UIApplication, Element, Line, angle // created
double dw = RevitAuxilaries.GetDimensionFromElement(uiapp, fi, Dimensions.enWidth); // found dw = 3.937
ptleft = new XYZ(31.501, -23.3878, 32.4803);
ptrght = new XYZ(31.501 + dw * Math.Cos(0.6981), -23.3878 + dw * Math.Sin(0.6981), 32.4803);
Line ln = RevitAuxilaries.CreateLineFromPoints(uiapp, ptleft, ptrght); //created
ReferenceArray refarr = new ReferenceArray();
refarr.Append(ln.GetEndPointReference(0));
refarr.Append(ln.GetEndPointReference(1));
Dimension dim = null;
using (Transaction trans = new Transaction(uiapp.ActiveUIDocument.Document, "CreADim"))
{
trans.Start();
dim = uiapp.ActiveUIDocument.Document.Create.NewDimension(vsec, ln, refarr);
if (!issame)
{
try
{ dim.ValueOverride = Convert.ToInt32(UnitUtils.Convert(dim.Value.Value,
UnitTypeId.Feet, UnitTypeId.Millimeters)).ToString(); }
catch { }
}
trans.Commit();
uiapp.ActiveUIDocument.RefreshActiveView();
}
ERROR: The direction of dimension is invalid
Error in function checkDir, line 939
What is wrong here?
Have you tried to create the exact same dimension in the exact same context manually through the end user interface? Does that complete as expected? If not, what error message does that generate? If yes, you can analyse the resulting model, its elements and their properties using RevitLookup and possibly discover some required settings that can be added to your API approach.

Display loading indicator for FullCalendar and Vuejs

Is there a way to display a loading indicator with FullCalendar 5 and VueJS?
https://fullcalendar.io/docs/loading
I checked the documentation and it states it works via AJAX requests, but it doesn't say anything about other technologies.
Is there any way to have something similar and easy to implement?
Regards.
When I get my agenda entries, I update my calendars with the events.
entreesAgenda() {
// Add the events to the calendar(s)
let entrees = []
if (this.entreesAgenda) {
for (let i = 0; i < this.utilisateurs.length; i++) {
entrees = this.data.items.filter(
(f) => f.idPersonnel === this.utilisateurs[i].id
)
// Update the businessHours
this.$refs.utilisateur[i]
.getApi()
.setOption('businessHours', this.calculBusinessHours(entrees))
// Insert the agenda entries into the calendar
for (let m = 0; m < entrees.length; m++) {
this.$refs.utilisateur[i].getApi().addEvent(entrees[m])
}
}
}
this.updateView()
},
But I don't know how to use the loading option.
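For what it's worth, here is a minimal sketch of how the loading callback could be wired up with the @fullcalendar/vue component; the isLoading flag and the template binding are my own assumptions, not taken from the question:
// Template (assumed): <FullCalendar :options="calendarOptions" /> plus e.g. <div v-if="isLoading">Loading...</div>
export default {
  data() {
    return {
      isLoading: false, // drives the spinner/overlay in the template
      calendarOptions: {
        // ...plugins, initialView, event sources, etc.
        loading: (isLoading) => {
          // fired by FullCalendar when event sources start/stop fetching
          this.isLoading = isLoading
        },
      },
    }
  },
}
Note that the loading callback only fires while event sources are being fetched; since the events here are added manually via getApi().addEvent(), it may be simpler to toggle such a flag around your own data request instead.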

How to verify PKCS#7 signature in PHP

I have a digital signature (PKCS#7 format) that needs to be verified. I have tried different PHP core methods (openssl_verify, openssl_pkcs7_verify) and even an external library such as phpseclib, but nothing worked :(
I get a signature along with some extra params through this link:
http://URL?sign={'param':{'Id':'XXXXXXX','lang':'EN','Rc':'00','Email': 'test#yahoo.com'’},'signature':'DFERVgYJKoZIhvcNAQcCoIIFRzCCBUMCAQExCzAJBgUrDg
MCGgUAMIGCBgkqhkiG9w0BBwGgdQRzeyJEYXRlIjoidG9fY2hhcihzeXNkYXRlJ0RETU1ZWVlZJykgIiwiSWQiOiJ
VMDExODg3NyIsIklkaW9tYSI6IkNBUyIsIk51bUVtcCI6IlUwM23D4DEE3dSi...'}
PHP code - always returns 0 (false) instead of 1 (true).
$JSONDATA = str_replace("'", '"', #$_GET["sign"]);
$data = json_decode($JSONDATA, true);
$this->email = $data['param']["EMAIL"];
$this->signature = $data['signature'];
$this->signature_base64 = base64_decode($this->signature);
$this->dataencoded = json_encode($data['param']);
//SOLUTION 1 (by using phpseclib) - but it didn't work..
$rsa = $this->phpseclib->load();
$keysize = 2048;
$rsa->setPrivateKeyFormat(CRYPT_RSA_PRIVATE_FORMAT_PKCS8);
$rsa->setPublicKeyFormat(CRYPT_RSA_PUBLIC_FORMAT_PKCS1);
$d = $rsa->createKey($keysize);
$Kver = $d['publickey'];
$KSign = $d['privatekey'];
// Signing
$rsa->loadKey($KSign);
$rsa->setSignatureMode(CRYPT_RSA_ENCRYPTION_PKCS1);
$rsa->setHash('sha256');
$signature = $rsa->sign($this->dataencoded);
$signedHS = base64_encode($signature);
// Verification
$rsa->loadKey($Kver);
$status = $rsa->verify($this->dataencoded, $this->firma_base64); // getting an error on this line Message: Invalid signature
var_dump($status); // returns false
//SOLUTION 2 (by using core PHP methods)
// obtain the public key from the certificate and prepare it
$orignal_parse = parse_url("https://example.com", PHP_URL_HOST);
$get = stream_context_create(array("ssl" => array("capture_peer_cert" => TRUE)));
$read = stream_socket_client("ssl://".$orignal_parse.":443", $errno, $errstr, 30, STREAM_CLIENT_CONNECT, $get);
$cert = stream_context_get_params($read);
$certinfo = openssl_x509_parse($cert['options']['ssl']['peer_certificate']);
openssl_x509_export($cert["options"]["ssl"]["peer_certificate"],$cert_key);
$pubkeyid = openssl_pkey_get_public($cert_key);
$dataencoded = json_encode($data['param']);
echo $ok = openssl_x509_check_private_key($cert_key,$this->firma_base64); // returns nothing
echo $ok1 = openssl_verify($dataencoded, $this->firma_base64, $pubkeyid, OPENSSL_ALGO_SHA256); // returns 0
echo $ok2 = openssl_verify($dataencoded, $this->firma_base64, $pubkeyid, OPENSSL_ALGO_SHA512); // returns 0
echo $ok3 = openssl_verify($dataencoded, $this->firma_base64, $pubkeyid, OPENSSL_ALGO_SHA256); // returns 0
echo $ok4 = openssl_verify($dataencoded, $this->firma, $pubkeyid, OPENSSL_ALGO_SHA512); // returns 0
Java code - (this code works and returns true)
private boolean verifySignautre(String frm) throws NetinfException, IOException, CMSException,
CertificateException, OperatorCreationException, Exception {
Security.addProvider(new BouncyCastleProvider());
//we extract the containers that make up the signature and the keystore used to sign included in the same signature.
CMSSignedData signedData = new CMSSignedData(Base64.decode(frm.getBytes()));
SignerInformationStore signers = signedData.getSignerInfos();
Store certStore = signedData.getCertificates();
Collection c = signers.getSigners();
Iterator it = c.iterator();
while (it.hasNext()) {
//retrieve the certificate with the recipient's id.
SignerInformation signerInfo = (SignerInformation) it.next();
Collection certCollection = certStore.getMatches(signerInfo.getSID());
Iterator certIt = certCollection.iterator();
X509CertificateHolder signerCertificateHolder = (X509CertificateHolder) certIt.next();
//create the container to validate signature.
ContentVerifierProvider contentVerifierProvider = new BcRSAContentVerifierProviderBuilder(new
DefaultDigestAlgorithmIdentifierFinder()).build(signerCertificateHolder);
//valid signature and then certificate validity date
try{
X509Certificate signedcert = new
JcaX509CertificateConverter().setProvider("BC").getCertificate(signerCertificateHolder);
signedcert.checkValidity();
signedcert.verify(signedcert.getPublicKey());
return true;
}catch(Exception e){
return false;
}
}
return false;
}
I simply need to convert this Java code into PHP. However, as you can see above, I tried different approaches but none of them worked.
Please support me in finding the solution.
Your support would be highly appreciated.

Flex 3 - Enable context menu for text object under a transparent PNG

Here is the situation. In my app I have an overlay layer that is composed of a transparent PNG. I have replaced the hitarea for the png with a 1x1 image using the following code:
[Bindable]
[Embed(source = "/assets/1x1image.png")]
private var onexonebitmapClass:Class;
private function loadCompleteHandler(event:Event):void
{
// Create the bitmap
var onexonebitmap:BitmapData = new onexonebitmapClass().bitmapData;
var bitmap:Bitmap;
bitmap = event.target.content as Bitmap;
bitmap.smoothing = true;
var _hitarea:Sprite = createHitArea(onexonebitmap, 1);
var rect:flash.geom.Rectangle = _box.toFlexRectangle(sprite.width, sprite.height);
var drawnBox:Sprite = new FlexSprite();
bitmap.width = rect.width;
bitmap.height = rect.height;
bitmap.x = -loader.width / 2;
bitmap.y = -loader.height / 2;
bitmap.alpha = _alpha;
_hitarea.alpha = 0;
drawnBox.x = rect.x + rect.width / 2;
drawnBox.y = rect.y + rect.height / 2;
// Add the bitmap as a child to the drawnBox
drawnBox.addChild(bitmap);
// Rotate the object.
drawnBox.rotation = _rotation;
// Add the drawnBox to the sprite
sprite.addChild(drawnBox);
// Set the hitarea to drawnBox
drawnBox.hitArea = _hitarea;
}
private function createHitArea(bitmapData:BitmapData, grainSize:uint = 1):Sprite
{
var _hitarea:Sprite = new Sprite();
_hitarea.graphics.beginFill(0x900000, 1.0);
for (var x:uint = 0; x < bitmapData.width; x += grainSize)
{
for (var y:uint = grainSize; y < bitmapData.height; y += grainSize)
{
if (x <= bitmapData.width && y <= bitmapData.height && bitmapData.getPixel(x, y) != 0)
{
_hitarea.graphics.drawRect(x, y, grainSize, grainSize);
}
}
}
_hitarea.graphics.endFill();
return _hitarea;
}
This is based off the work done here: Creating a hitarea for PNG Image with transparent (alpha) regions in Flex
Using the above code I am able to basically ignore the overlay layer for all mouse events (click, double click, move, etc.) However, I am unable to capture the right click (context menu) event for items that are beneath the overlay.
For instance, I have a spell check component that checks the spelling of any text item; like most other spell checkers, if a word is incorrect or not in the dictionary it underlines the word in red, and if you right-click on it, it gives you a list of suggestions in the context menu. This works great when the text box is not under the overlay, but if the text box is under the overlay I get nothing back.
If anyone can give me some pointers on how to capture the right click event on a textItem that is under a transparent png that would be great.

WinRT UI element to follow another during manipulation

I am trying to use manipulation on a UI element (rectangle) and can rotate and translate it without problem. What I would like to achieve is to make another UI element (ellipse for example) to follow the first (rectangle).
If I apply the same transform group (that I used for the rectangle) to the ellipse, it works fine during translation manipulation, but during rotation the ellipse does not follow the rectangle.
I think I somehow must find a suitable composite transform center point to provide to the ellipse, but I cannot figure out how.
Here is corresponding sample code.
public MainPage()
{
this.InitializeComponent();
rectMy.ManipulationMode = ManipulationModes.None | ManipulationModes.TranslateX | ManipulationModes.TranslateY | ManipulationModes.Rotate;
rectMy.ManipulationStarted += rectMy_ManipulationStarted;
rectMy.ManipulationDelta += rectMy_ManipulationDelta;
rectMy.ManipulationCompleted += rectMy_ManipulationCompleted;
transformGroup.Children.Add(previousTransform);
transformGroup.Children.Add(compositeTransform);
rectMy.RenderTransform = transformGroup;
}
void rectMy_ManipulationCompleted(object sender, ManipulationCompletedRoutedEventArgs e)
{
e.Handled = true;
}
void rectMy_ManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
previousTransform.Matrix = transformGroup.Value;
Point center = previousTransform.TransformPoint(new Point(rectMy.Width / 2, rectMy.Height / 2));
compositeTransform.CenterX = center.X;
compositeTransform.CenterY = center.Y;
compositeTransform.Rotation = e.Delta.Rotation;
compositeTransform.ScaleX = compositeTransform.ScaleY = e.Delta.Scale;
compositeTransform.TranslateX = e.Delta.Translation.X;
compositeTransform.TranslateY = e.Delta.Translation.Y;
}
void rectMy_ManipulationStarted(object sender, ManipulationStartedRoutedEventArgs e)
{
e.Handled = true;
}
OK, I understand it better now and found the solution. It is all about the center point of the composite transform (as I initially guessed). For the center of the ellipse, I had to feed in the center of the rectangle. However, the coordinate needed to be given relative to the ellipse. In my case the ellipse is at the right upper corner of the rectangle, so below is what I have given as the composite transform center.
Point centerE = previousTransformE.TransformPoint(new Point(-rectMy.Width / 2 + ellipseMy.Width / 2, rectMy.Height / 2 + ellipseMy.Height / 2));
For rectangle, the center point for composite transform was:
Point center = previousTransform.TransformPoint(new Point(rectMy.Width / 2, rectMy.Height / 2));
Stackoverflow does not allow me to post an image to better visualize the things. Sorry!
The whole code:
previousTransform.Matrix = transformGroup.Value;
previousTransformE.Matrix = transformGroupE.Value;
Point center = previousTransform.TransformPoint(new Point(rectMy.Width / 2, rectMy.Height / 2));
compositeTransform.CenterX = center.X;
compositeTransform.CenterY = center.Y;
compositeTransform.Rotation = e.Delta.Rotation;
compositeTransform.TranslateX = e.Delta.Translation.X;
compositeTransform.TranslateY = e.Delta.Translation.Y;
Point centerE = previousTransformE.TransformPoint(new Point(-rectMy.Width / 2 + ellipseMy.Width / 2, rectMy.Height / 2 + ellipseMy.Height / 2));
compositeTransformE.CenterX = centerE.X;
compositeTransformE.CenterY = centerE.Y;
compositeTransformE.Rotation = e.Delta.Rotation;
compositeTransformE.TranslateX = e.Delta.Translation.X;
compositeTransformE.TranslateY = e.Delta.Translation.Y;