I have an image on a stage and I'm drawing over it and erasing it, using the following method:
http://jsfiddle.net/lannymcnie/ZNYPD/
Now I want to take a clone of the user's drawing, but it's not working. I tried
var drawingAreaClone = drawingArea.clone(true);
but it's not working.
Is there a way to clone it? Kindly help.
The demo you posted doesn't clear the stage, but instead clears the graphics each frame. If you clone the shape, it will have no instructions.
@Catalin's answer is right if you just need a visual -- but another option is to use the Graphics store() method instead of clearing the graphics: http://createjs.com/docs/easeljs/classes/Graphics.html#method_store
Essentially, this method just sets an internal pointer to where the graphics draw from. By storing after each draw, only the draw calls made afterwards are rendered. This has the same effect as the demo you posted, only you can call unstore() later to reset the Graphics to draw from the beginning. If you clone it this way, it should work.
var erase = document.getElementById("toggle").checked;
wrapper.updateCache(erase?"destination-out":"source-over");
//drawing.graphics.clear();
drawing.graphics.store(); // Don't draw anything before this point
// Later
var newGraphics = drawing.graphics.clone();
newGraphics.unstore(); // Might be redundant
var shape = new createjs.Shape(newGraphics);
Note that cloning Graphics doesn't recreate the entire graphics tree, and simply clones the array that stores the instructions. Modifying the individual instructions after the fact would impact any clones of that Graphics object.
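To illustrate that last point, here is a minimal sketch (the variable names are purely for illustration): the clone gets its own instruction array, but the command objects inside it are shared, so editing a command afterwards changes both graphics.
var g = new createjs.Graphics();
var fillCommand = g.beginFill("red").command; // keep a reference to the Fill command
g.drawRect(0, 0, 50, 50);
var g2 = g.clone();          // new instruction array, but the same command objects
fillCommand.style = "blue";  // both g and g2 now draw a blue rectangle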
Hope that helps.
If the drawn line shape is a child of drawingArea, then the clone should work properly.
However, if for some reason you can't make it work that way, you can take a snapshot of the canvas and save it as an Image variable like this:
var snapshot = new Image();
snapshot.src = canvas.toDataURL();
Also, if you don't want to snapshot the whole canvas, then after you have saved the initial image you can limit the snapshot to a rectangle with these extra instructions:
var ctx = canvas.getContext('2d');
canvas.width = rectangle.width;   // resize the canvas to the crop rectangle (resizing also clears it)
canvas.height = rectangle.height;
ctx.drawImage(snapshot, rectangle.x, rectangle.y, rectangle.width, rectangle.height, 0, 0, rectangle.width, rectangle.height);
snapshot.src = canvas.toDataURL();
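Either way, once the snapshot image has loaded you can put it back on the stage as a regular Bitmap if you need the clone as a display object -- a minimal sketch, assuming stage is your existing EaselJS stage:
var bitmapClone = new createjs.Bitmap(snapshot); // snapshot from the code above
stage.addChild(bitmapClone);
stage.update();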
I am trying to delete a canvas. For that I have used the below two lines of code:
const context = canvas.getContext('2d');
context.clearRect(0, 0, canvas.width, canvas.height);
This just clears the canvas, but on clicking that canvas it shows the same data again.
The canvas is not deleted, but temporarily hidden on clear.
Konva is a scene graph for your canvas. The scene has nodes, such as Layer, Group, Shape.
You don't need to clear the canvas element manually. You just need to destroy all nodes from the scene. Like this:
layer.destroyChildren();
layer.draw();
I tried this too. That's also not working.
I tried the below code and that's working fine.
var stage_main = this.$refs.stage.getStage();
stage_main.clear();
Object.keys(this.canvasElements).forEach((key) => {
    this.canvasElements[key] = [];
});
I also needed to clear the canvasElements object along with clearing the stage.
I'm making a Vulkan (LWJGL) app that draws directly to framebuffer images. No vertex or fragment shader, just a compute shader that builds an image, which I blit directly into a framebuffer image. It's working fine.
Now I'm trying to render a UI (imgui) on top of that, but I think I have some synchronization trouble: the UI sometimes blinks (maybe the blit is not finished yet?). The render is perfect for a few seconds, but sooner or later the UI disappears for perhaps one frame. I only build the render pass once, and I don't record the command buffers more than once for this test, so I think the problem happens when I record the command buffer.
Maybe the problem is that the vkCmdBlitImage is still running when the UI starts to draw?
I tried to add some barriers between the blit and the UI render, but the problem is still there.
I think I'm really missing something, so I'm asking for your help. I've put the code I think is relevant below, but don't hesitate to ask for more.
The recording of the command buffers:
for (int i = 0; i < commandBuffers.size(); i++)
{
    RenderCommandBuffer commandBuffer = commandBuffers.get(i);
    ImageView imageView = configuration.imageViewManager.getImageViews().get(i);
    commandBuffer.startCommand();
    copyPixelBufferToFB(commandBuffer, imageView);
    commandBuffer.startRenderPass();
    imGui.drawFrame(commandBuffer.getVkCommandBuffer());
    commandBuffer.endRenderPass();
    commandBuffer.endCommand();
}
The copyPixelBufferToFB method:
Extent2D extent = context.swapChainManager.getExtent();
ImageView dstImageView = context.imageViewManager.getImageViews().get(commandBuffer.id);
// Intend to blit from this image, so set the layout accordingly.
// Prepare the transfer from the compute image to the framebuffer image.
ImageBarrier barrier = new ImageBarrier(VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
VK_PIPELINE_STAGE_TRANSFER_BIT);
barrier.addImageBarrier(srcImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
VK_ACCESS_TRANSFER_READ_BIT);
barrier.addImageBarrier(dstImageView.getImageId(), dstImageView.getImageFormat(), 1, 0,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 0, VK_ACCESS_TRANSFER_WRITE_BIT);
barrier.execute(commandBuffer.getVkCommandBuffer());
long bltSrcImage = srcImage.getId();
long bltDstImage = dstImageView.getImageId();
VkImageBlit.Buffer region = VkImageBlit.calloc(1);
region.srcSubresource().aspectMask(VK_IMAGE_ASPECT_COLOR_BIT);
region.srcSubresource().mipLevel(0);
region.srcSubresource().baseArrayLayer(0);
region.srcSubresource().layerCount(1);
region.srcOffsets(0).x(0);
region.srcOffsets(0).y(0);
region.srcOffsets(0).z(0);
region.srcOffsets(1).x(srcImage.getWidth());
region.srcOffsets(1).y(srcImage.getHeight());
region.srcOffsets(1).z(1);
region.dstSubresource().aspectMask(VK_IMAGE_ASPECT_COLOR_BIT);
region.dstSubresource().mipLevel(0);
region.dstSubresource().baseArrayLayer(0);
region.dstSubresource().layerCount(1);
region.dstOffsets(0).x(0);
region.dstOffsets(0).y(0);
region.dstOffsets(0).z(0);
region.dstOffsets(1).x(extent.getWidth());
region.dstOffsets(1).y(extent.getHeight());
region.dstOffsets(1).z(1);
vkCmdBlitImage(commandBuffer.getVkCommandBuffer(), bltSrcImage,
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, bltDstImage,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, region, VK_FILTER_NEAREST);
// Change layout again before render pass.
ImageBarrier barrierEnd = new ImageBarrier(VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT);
barrierEnd.addImageBarrier(srcImage, VK_IMAGE_LAYOUT_GENERAL, VK_ACCESS_SHADER_WRITE_BIT);
barrierEnd.addImageBarrier(dstImageView.getImageId(), dstImageView.getImageFormat(), 1,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
VK_ACCESS_TRANSFER_WRITE_BIT, VK_ACCESS_SHADER_WRITE_BIT);
barrierEnd.execute(commandBuffer.getVkCommandBuffer());
How I submit the command buffer (something like this):
bWaitSemaphores.put(waitSemaphore.getId());
bwaitStage.put(VK_PIPELINE_STAGE_TRANSFER_BIT);
pCommandBuffers.put(commandBuffer.getVkCommandBuffer());
submitInfo = VkSubmitInfo.calloc();
submitInfo.sType(VK_STRUCTURE_TYPE_SUBMIT_INFO);
submitInfo.waitSemaphoreCount(bWaitSemaphores.size());
submitInfo.pWaitSemaphores(bWaitSemaphores);
submitInfo.pWaitDstStageMask(bwaitStage);
submitInfo.pCommandBuffers(pCommandBuffers);
[...]
vkQueueSubmit(queue, submitInfo, VK_NULL_HANDLE);
The creation of the RenderPass:
VkAttachmentDescription colorAttachment = VkAttachmentDescription.calloc();
colorAttachment.format(context.swapChainManager.getColorDomain().getColorFormat());
colorAttachment.samples(VK_SAMPLE_COUNT_1_BIT);
colorAttachment.loadOp(VK_ATTACHMENT_LOAD_OP_LOAD);
colorAttachment.storeOp(VK_ATTACHMENT_STORE_OP_STORE);
colorAttachment.stencilLoadOp(VK_ATTACHMENT_LOAD_OP_DONT_CARE);
colorAttachment.stencilStoreOp(VK_ATTACHMENT_STORE_OP_DONT_CARE);
colorAttachment.initialLayout(VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
colorAttachment.finalLayout(VK_IMAGE_LAYOUT_PRESENT_SRC_KHR);
VkAttachmentReference.Buffer colorAttachmentRef = VkAttachmentReference.calloc(1);
colorAttachmentRef.attachment(0);
colorAttachmentRef.layout(VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
VkSubpassDescription.Buffer subpass = VkSubpassDescription.calloc(1);
subpass.pipelineBindPoint(VK_PIPELINE_BIND_POINT_GRAPHICS);
subpass.colorAttachmentCount(1);
subpass.pColorAttachments(colorAttachmentRef);
VkSubpassDependency.Buffer dependency = VkSubpassDependency.calloc(1);
dependency.srcSubpass(VK_SUBPASS_EXTERNAL);
dependency.dstSubpass(0);
dependency.srcStageMask(VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT);
dependency.dstStageMask(VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT);
dependency.srcAccessMask(0);
dependency.dstAccessMask(
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT);
int attachmentCount = 1;
VkAttachmentDescription.Buffer attachments = VkAttachmentDescription
.calloc(attachmentCount);
attachments.put(colorAttachment);
attachments.flip();
VkRenderPassCreateInfo renderPassInfo = VkRenderPassCreateInfo.calloc();
renderPassInfo.sType(VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO);
renderPassInfo.pAttachments(attachments);
renderPassInfo.pSubpasses(subpass);
renderPassInfo.pDependencies(dependency);
long[] aRenderPass = new long[1];
if (vkCreateRenderPass(logicalDevice.getVkDevice(), renderPassInfo, null,
        aRenderPass) != VK_SUCCESS)
{
    throw new AssertionError("Failed to create render pass!");
}
renderPass = aRenderPass[0];
Edit 12/12/2018:
The imGui.drawFrame() call in the code above is now irrelevant: I changed the GUI from ImGui to Nuklear and completely rewrote the GUI pipeline. Unfortunately, the same problem still appears.
However, I'm still using vkCmdDrawIndexed to draw it.
The documentation says:
The Timeline class synchronizes multiple tweens and allows them to be controlled as a group.
but there is no example of how to use it. If I create a Timeline with
var tl = createjs.Timeline();
none of the shapes are rendered anymore.
Timeline is a great feature in TweenMax and I'd like to use it on the canvas too.
Creating a Timeline shouldn't affect the rendering of the Shapes - could you provide some more code or explain further what you're trying to do?
The usage of Timeline is quite straightforward:
var timeline = new createjs.Timeline(); //create the Timeline
timeline.addTween(tween, tween2); // add some tweens
timeline.setPaused(true); // pause all tweens
timeline.setPosition(300); // set position on all tweens ...
However, if you're more used to GSAP, you could just use GSAP in combination with EaselJS/CreateJS - they work great together.
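For example, a minimal sketch of that combination, assuming an existing stage and shape and the TweenMax 2.x syntax:
createjs.Ticker.addEventListener("tick", stage); // keep the EaselJS stage redrawing every tick
TweenMax.to(shape, 1.5, {x: 300, alpha: 1, ease: Power2.easeOut}); // GSAP animates the EaselJS properties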
I have an example:
//First Tween on rav4 object
var ravScaleTween = createjs.Tween.get(rav4)
    .wait(350)
    .to({scaleX:1, scaleY:1, x:ravEndPoint.x, y:ravEndPoint.y}, 1500, createjs.Ease.quadOut);
//Second Tween on rav4 object
var ravAlphaTween = createjs.Tween.get(rav4)
    .wait(350)
    .to({alpha:1}, 400);
//Create Timeline class
var ravTimeLine = new createjs.Timeline();
ravTimeLine.addTween(ravScaleTween, ravAlphaTween);
The rav4 object starts to scale and move to its end x and y position over 1500 milliseconds, while the alpha of the rav4 object fades up over 400 milliseconds.
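Once both tweens are in the timeline they can be controlled as one unit, using the same calls shown in the answer above:
ravTimeLine.setPaused(true);  // pause both tweens
ravTimeLine.setPosition(0);   // rewind them together
ravTimeLine.setPaused(false); // play them again as a group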
I have several images, which are Symbols (MovieClips) with an alpha parameter.
I'm creating a dynamic text field from AS3 so I can change the text every few seconds.
The problem is that everything worked fine until I converted the images to MovieClips. After that, my text fields are not visible.
Here is the code:
textFormat = new TextFormat();
textfield = new TextField();
textFormat.font = new customFonts().fontName;
textFormat.size = 16;
textFormat.align = "center";
textFormat.color = 0xFFFFFF;
textfield.defaultTextFormat = textFormat;
textfield.embedFonts = true;
textfield.width = 480;
textfield.height = 95;
textfield.x = 185;
textfield.y = 22;
textfield.wordWrap = true;
addChild (textfield);
So the question is: how do I bring this text field to the top so it is visible?
You're adding your text field after the movie clips are being initiated. Think of it in terms of layers: the text field is on the bottom layer, hence it will not be seen.
I would look at the Container class.
The Container class is an abstract base class for components that controls the layout characteristics of child components. You do not create an instance of Container in an application. Instead, you create an instance of one of Container's subclasses, such as Canvas or HBox.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/mx/core/Container.html
You should be able to change what is displayed.
EDIT:
Anytime you add a clip it is added on top by default.
You should also look into Z-Index.
If you are coding with FlashDevelop it can get tricky, whereas using Adobe Flash CC can make your life so much easier!
Sorry if it's not that much of an answer.
I'm a newbie with Ogre3D and I need help on a certain point.
I'm trying a library that mixes the Ogre3D engine and QML:
http://advancingusability.wordpress.com/2013/08/14/qmlogre-is-now-a-library/
This library works fine when you want to draw some objects and rotate or translate them, provided they were already initialised in a first step:
void initialize(){
    // we only want to initialize once
    disconnect(this, &ExampleApp::beforeRendering, this, &ExampleApp::initializeOgre);
    // start up Ogre
    m_ogreEngine = new OgreEngine(this);
    m_root = m_ogreEngine->startEngine();
    m_ogreEngine->setupResources();
    m_ogreEngine->activateOgreContext();
    // draw a small cube
    new DebugDrawer(m_sceneManager, 0.5f);
    DrawCube(100, 100, 100);
    DebugDrawer::getSingleton().build();
    m_ogreEngine->doneOgreContext();
    emit(ogreInitialized());
}
But if you want to draw or change the scene after this initialisation step, it becomes problematic.
In fact, with Ogre3D alone (without the QmlOgre library), you have to use a FrameListener,
which hooks into the rendering loop and allows your scene to be repainted.
But here we have two OpenGL contexts: one for Qt and one for Ogre.
So if you try to add the usual code:
createScene();
createFrameListener();
// The render loop
m_root->startRendering();
//createScene();
while(true)
{
    Ogre::WindowEventUtilities::messagePump();
    if(pRenderWindow->isClosed())
        std::cout << "pRenderWindow close" << std::endl;
    if(!m_root->renderOneFrame())
        std::cout << "root renderOneFrame" << std::endl;
}
the app will freeze! I know that startRendering() is itself a render loop, so the while loop below it never gets executed.
But I don't know where to put those lines or how to correct this part.
I've also tried to create a background context and to swap buffers:
void OgreEngine::updateOgreContext()
{
    glPopAttrib();
    glPopClientAttrib();
    m_qtContext->functions()->glUseProgram(0);
    m_qtContext->doneCurrent();
    delete m_qtContext;
    m_BackgroundContext = QOpenGLContext::currentContext();
    // create a new shared OpenGL context to be used exclusively by Ogre
    m_BackgroundContext = new QOpenGLContext();
    m_BackgroundContext->setFormat(m_quickWindow->requestedFormat());
    m_BackgroundContext->setShareContext(m_qtContext);
    m_BackgroundContext->create();
    m_BackgroundContext->swapBuffers(m_quickWindow);
    //m_ogreContext->makeCurrent(m_quickWindow);
}
but I still get the same error:
OGRE EXCEPTION(7:InternalErrorException): Cannot create GL vertex buffer in GLHardwareVertexBuffer::GLHardwareVertexBuffer at Bureau/bibliotheques/ogre_src_v1-8-1/RenderSystems/GL/src/OgreGLHardwareVertexBuffer.cpp (line 46)
I'm very stuck and I don't know what to do.
Thanks!