WebBrowser control: Block a specific ActiveX control from loading

I am hosting the WebBrowser control (using ATL), and I'm looking for a way to block a specific ActiveX control (by CLSID) from loading.
I know ProcessUrlAction can block ActiveX controls, but that appears to apply to the entire URL; it doesn't seem to let you block a specific ActiveX control by CLSID.
I don't see any specific event interfaces that get notified in MSHTML or the WebBrowser control.
Right now the only solution I can think of is to hook CoCreateInstanceEx and try to block it there.
Any simpler ideas?

ProcessUrlAction can block individual controls as well. Check whether dwAction is URLACTION_ACTIVEX_RUN; if so, pContext holds the CLSID of the control that is about to run. If it is the one you want to block, set *pPolicy to URLPOLICY_DISALLOW and return S_FALSE:
static CLSID CLSID_BAD = {0x00000000, 0x0000, 0x0000, {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}};

STDMETHOD(ProcessUrlAction)(LPCWSTR pwszUrl, DWORD dwAction, BYTE *pPolicy, DWORD cbPolicy, BYTE *pContext, DWORD cbContext, DWORD dwFlags, DWORD dwReserved)
{
    // For URLACTION_ACTIVEX_RUN, pContext carries the CLSID of the control
    // about to run; guard against a missing/short context before reading it.
    if (URLACTION_ACTIVEX_RUN == dwAction &&
        pContext != NULL && cbContext >= sizeof(CLSID) &&
        CLSID_BAD == *(CLSID *)pContext)
    {
        *pPolicy = URLPOLICY_DISALLOW;
        return S_FALSE;
    }
    return INET_E_DEFAULT_ACTION;
}

Related

How can I display a QByteArray in a TextArea?

I'm trying to make an encryption/decryption tool for some configuration files that allows for in-place editing of file contents (via a GUI). The encryption/decryption process is working fine, but I've run into an issue displaying the content of the file for editing.
The file being decrypted contains hex values representing an integer (4 bytes) followed by a null-terminated string. An example of such a file (an int value of 1 and the string value "Test") could be the following:
"010000005465737400"
The decrypted contents are stored into a QByteArray that is then displayed using a TextArea element. The problem is the TextArea stops displaying text when a 0x00 value is reached. I expected it to have displayed a 0 instead of stopping.
Is there a way to display the byte array properly?
In the following, I implement testData() to generate an ArrayBuffer with your 9 bytes. Then I implement arrayBufferToHexString(), which iterates through the data byte by byte and converts it to a hex string:
import QtQuick
import QtQuick.Controls

Page {
    Frame {
        width: parent.width
        TextEdit {
            text: arrayBufferToHexString(testData())
        }
    }

    function testData() {
        let codes = [
            0x01, 0x00, 0x00, 0x00,
            0x54, 0x65, 0x73, 0x74,
            0x00
        ];
        return codesToArrayBuffer(codes);
    }

    // Write the byte values through a Uint8Array view; assigning to indices
    // of a raw ArrayBuffer only creates plain JS properties, not bytes.
    function codesToArrayBuffer(codes) {
        let byteArray = new ArrayBuffer(codes.length);
        let view = new Uint8Array(byteArray);
        for (let i = 0; i < codes.length; i++)
            view[i] = codes[i];
        return byteArray;
    }

    // Read the bytes back through a view as well, so this also works for a
    // genuine ArrayBuffer (e.g. one coming from a QByteArray).
    function arrayBufferToHexString(byteArray) {
        let view = new Uint8Array(byteArray);
        let result = [];
        for (let i = 0; i < view.byteLength; i++)
            result.push(view[i].toString(16).padStart(2, "0"));
        return result.join("");
    }
}
I suspect the issue you're having is that you treat the QByteArray as a string rather than as an ArrayBuffer. Once it is converted to a string, the 0x00 byte is treated as a NUL terminator and truncates the data prematurely.
Given your use case, you want to avoid string operations entirely, since the raw data must not be interpreted as text.
Since you mentioned QByteArray, you have C++ at your disposal.
I would highly recommend implementing the QByteArray-to-hex-string and hex-string-to-QByteArray conversions in C++ and exposing them to QML as Q_INVOKABLE methods, e.g.:
Q_INVOKABLE QString byteArrayToHexString(const QByteArray &buffer)
{
    return QString::fromUtf8(buffer.toHex());
}

Q_INVOKABLE QByteArray hexStringToByteArray(const QString &hexString)
{
    return QByteArray::fromHex(hexString.toUtf8());
}
If you do that, QML can simply delegate the conversions to C++.
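For illustration, here are the same two conversions in plain C++ without Qt, using std::string as a stand-in for QByteArray (the function names are mine, not Qt API); QByteArray::toHex() and QByteArray::fromHex() do the equivalent work for you:

```cpp
#include <string>

// Hex-encode raw bytes (what QByteArray::toHex() does).
std::string bytesToHex(const std::string &bytes)
{
    static const char *digits = "0123456789abcdef";
    std::string out;
    out.reserve(bytes.size() * 2);
    for (unsigned char c : bytes) {
        out.push_back(digits[c >> 4]);
        out.push_back(digits[c & 0x0F]);
    }
    return out;
}

// Decode a hex string back to raw bytes (what QByteArray::fromHex() does).
std::string hexToBytes(const std::string &hex)
{
    auto val = [](char c) -> int {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        return 0;
    };
    std::string out;
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2)
        out.push_back(static_cast<char>((val(hex[i]) << 4) | val(hex[i + 1])));
    return out;
}
```

Because std::string carries an explicit length, the embedded 0x00 bytes survive the round trip, which is exactly the property the string-based TextArea approach loses.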

More than one EndPoint on USB using PIC16F1455/1459

I am using a PIC16F1455 to collect data and send it to a computer.
I used existing code for simple testing, which uses only one endpoint. Szymon's code is my base, which I have expanded a bit.
I would like to use 3 endpoints for my application.
I have tried to set up the system to have 2 endpoints, but my second endpoint is not working.
I can add that:
I have my configuration descriptor as below.
The host will ask for report descriptor 1, but not for the 2nd one.
When trying to send something from endpoint 2, I can only see that UEP2 is owned by the SIE (Serial Interface Engine).
When I try to alter the code so that UEP1 uses the UEP2 hardware, it does not work. I did this by changing the addresses from 01 to 02 and from 81 to 82. Changing just one of them makes it work in one direction only.
Below is my code as I had it with 2 endpoints, which gave no errors; UEP2 just does not work. Messing up the interface count or message size does give an error. The comments note which changes can be made.
I guess that if both channels are supposed to be identical, then the same configuration for both endpoints should be fine, and only the endpoint numbers and addresses need to change. Am I right?
I also understand that UEP0 is used by the system and cannot be used for custom messages.
I need some ideas about what could be wrong and how to get a second endpoint to work. I am out of ideas, and I find it hard to find much about this by searching. The host should ask for both report descriptors when using 2 endpoints, right?
// Configuration descriptor
const ConfigStruct ConfigurationDescriptor =
{
{
// Configuration descriptor
0x09, // Size of this descriptor in bytes
0x02, // CONFIGURATION descriptor type
0x29, // Total length of data for this cfg LSB // was 29 // 49 for 2 end points
0x00, // Total length of data for this cfg MSB
1,//INTF, // Number of interfaces in this cfg
0x01, // Index value of this configuration
SCON, // Configuration string index
0xA0, // Attributes (USB powered, wake-up))
0x32, // Max power consumption (in 2 mA steps)
},
{
// Generic HID Interface descriptor
0x09, // Size of this descriptor in bytes
0x04, // INTERFACE descriptor type
IHID, // Interface Number //<- I assume that it stays 1 just using UEP2. Cannot start from 2
0x00, // Alternate Setting Number
0x02, // Number of endpoints in this interface
0x03, // Class code (HID)
0x00, // Subclass code
0x00, // Protocol code 0-none, 1-Keyboard, 2- Mouse
0x00, // Interface string index
// Generic Hid Class-Specific descriptor
0x09, // Size of this descriptor in bytes
0x21, // HID descriptor type
0x11, // HID Spec Release Number in BCD format (1.11) LSB
0x01, // HID Spec Release Number in BCD format (1.11) MSB
0x00, // Country Code (0x00 for Not supported)
0x01, // Number of class descriptors
0x22, // Report descriptor type
0x2F, // Report Size LSB (47 bytes)
0x00, // Report Size MSB
// Generic HID Endpoint 1 In
0x07, // Size of this descriptor in bytes
0x05, // ENDPOINT descriptor type
0x81, // Endpoint Address //<----- changing to 82 will not work
0x03, // Attributes (Interrupt)
HRBC, // Max Packet Size LSB
0x00, // Max Packet Size MSB
0x01, // Interval (1 millisecond)
// Generic HID Endpoint 1 Out
0x07, // Size of this descriptor in bytes
0x05, // ENDPOINT descriptor type
0x01, // Endpoint Address //<--------changing on 02 will not work
0x03, // Attributes (Interrupt)
HRBC, // Max Packet Size LSB
0x00, // Max Packet Size MSB
0x01, // Interval (1 millisecond)
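As a side check, the wTotalLength values in the descriptor comments above (0x29 for one interface, 0x49 for two) can be verified arithmetically: each HID interface contributes an interface descriptor (9 bytes), a HID class descriptor (9 bytes), and two endpoint descriptors (7 bytes each), on top of the 9-byte configuration descriptor. A sketch, assuming this two-endpoints-per-interface layout:

```cpp
// wTotalLength of the configuration: 9-byte configuration descriptor plus,
// per interface, 9 (interface) + 9 (HID) + 2 * 7 (endpoint descriptors).
constexpr int configTotalLength(int interfaceCount)
{
    return 9 + interfaceCount * (9 + 9 + 7 + 7);
}
```

configTotalLength(1) is 41 (0x29) and configTotalLength(2) is 73 (0x49), matching the comments in the descriptor.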
I found the mistake to be in the reply to report descriptor requests: the code would only answer for interface 0 (endpoint 1 in hardware), so when the host asked for the next report descriptor, it got no answer.
if((SetupPacket.bmRequestType & 0x1F) != 0x01 || (SetupPacket.wIndex0 != 0x00)) return;
needs to be
if((SetupPacket.bmRequestType & 0x1F) != 0x01 || (SetupPacket.wIndex0 > (InterfaceCount - 1))) return;
Then it works.
The next step is the PC host side, where every endpoint is a connection of its own.
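The corrected gate can be sketched in isolation. The names follow the snippet above; reportRequestAccepted is my own helper, and 0x81 in the checks below is the bmRequestType of a standard device-to-host GET_DESCRIPTOR request directed at an interface:

```cpp
#include <cstdint>

// Sketch of the corrected report-descriptor request gate: accept any
// interface index below interfaceCount instead of only interface 0.
bool reportRequestAccepted(uint8_t bmRequestType, uint8_t wIndex0, uint8_t interfaceCount)
{
    if ((bmRequestType & 0x1F) != 0x01)                       // recipient must be an interface
        return false;
    if (wIndex0 > static_cast<uint8_t>(interfaceCount - 1))   // interface index out of range
        return false;
    return true;
}
```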

STM32 USB Middleware: HID Output Report Transaction Error

I've been having problems configuring my STM32 device to receive HID interrupt OUT transactions, with the PC as the host device.
I use the standard STM32CubeMX-provided USB middleware, edited to allow 2 endpoints (one OUT and one IN), along with a customized HID report descriptor:
__ALIGN_BEGIN static uint8_t HID_ReportDesc[HID_REPORT_DESC_SIZE] __ALIGN_END =
{
0x06, 0x00, 0xFF, // Usage Page (Vendor Defined 0xFF00)
0x09, 0x01, // Usage (0x01)
0xA1, 0x01, // Collection (Application)
0x15, 0x00, // Logical Minimum (0)
0x26, 0xFF, 0x00, // Logical Maximum (255)
0x75, 0x08, // Report Size (8)
0x95, 0x40, // Report Count (64)
0x09, 0x02, // Usage (0x02)
0x81, 0x00, // Input (Data,Array,Abs,No Wrap,Linear,Preferred State,No Null Position)
0x95, 0x40, // Report Count (64)
0x09, 0x02, // Usage (0x02)
0x91, 0x00, // Output (Data,Array,Abs,No Wrap,Linear,Preferred State,No Null Position,Non-volatile)
0xC0, // End Collection
};
Using the provided input report function USBD_HID_SendReport, I can observe the behavior of my device.
On the PC side, using USBLyzer software, I can confirm that the device is recognized by Windows, and configured accordingly, reporting back the descriptors with no errors. I then use HIDAPI to open the device, and read the reported values. This implementation works as expected, giving me communication of data from my device to the host.
However, there is no provided HID Output Report function, leaving me to implement this myself:
uint8_t USBD_HID_ReceiveReport(USBD_HandleTypeDef *pdev, uint8_t *report, uint16_t len)
{
    USBD_HID_HandleTypeDef *hhid = (USBD_HID_HandleTypeDef *)pdev->pClassData;

    if (pdev->dev_state == USBD_STATE_CONFIGURED) {
        if (hhid->state == HID_IDLE) {
            hhid->state = HID_BUSY;
            // Arm the OUT endpoint to receive the next report into 'report'.
            USBD_LL_PrepareReceive(pdev, HID_EPOUT_ADDR, report, len);
        }
    }
    return USBD_OK;
}
This function mirrors the structure of the provided USBD_HID_SendReport function, but uses USBD_LL_PrepareReceive to arm the EPOUT address for the transaction, with the transfer then handled by the PCD layer.
After device initialization, I call my USBD_HID_ReceiveReport function repeatedly in an endless loop. I use hid_write in HIDAPI to transfer data to the device once. Under the debugger, the device iterates through the loop and enters the PCD endpoint code on every call, but hid_write does not update my buffer on the device as expected, and USBlyzer reports a 'transaction error' on the OUT report.
Does anyone know the error in my implementation?

UDP with UWP behaves differently

I wrote an app which sends a UDP datagram. If I run it on a computer with Windows 10, it works; the other device (a commercial one) responds correctly. If I run the same app on Windows 10 IoT (Raspberry Pi 2), the device does not respond. My first thought was a firewall problem, so I looked at the traffic with Wireshark. In both cases the datagrams sent over the WLAN are identical. With Windows 10 I see the response from the device; with IoT there is no response.
Here is the method I use to send the datagram:
private async void FindDevice()
{
    DatagramSocket socket = new DatagramSocket();
    socket.MessageReceived += Socket_MessageReceived;

    IPAddress ipAddressOfSender;
    // The device must be in the same network.
    if (IPAddress.TryParse("192.168.0.1", out ipAddressOfSender))
    {
        byte[] broadcastIpAddress = ipAddressOfSender.GetAddressBytes();
        // Assuming a class C IP address, so the broadcast address looks like a.b.c.255.
        broadcastIpAddress[3] = 255;

        using (var stream = await socket.GetOutputStreamAsync(
            new HostName(new IPAddress(broadcastIpAddress).ToString()), SendingPort.ToString()))
        {
            using (var writer = new DataWriter(stream))
            {
                byte[] helloSmartPlugs = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x45, 0x44, 0x49, 0x4d, 0x41,
                                           0x58, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xa1, 0xff, 0x5e };
                writer.WriteBytes(helloSmartPlugs);
                await writer.StoreAsync();
            }
        }
    }
}
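The broadcast-address derivation in the method above (assuming a /24, i.e. class C, netmask) simply forces the host octet to 255. A stand-alone sketch in C++, with a function name of my own choosing:

```cpp
#include <array>
#include <cstdint>
#include <string>

// Derive the directed broadcast address for a /24 network by forcing the
// host octet to 255, mirroring broadcastIpAddress[3] = 255 above.
std::string broadcastForSlash24(std::array<uint8_t, 4> ip)
{
    ip[3] = 255;
    return std::to_string(ip[0]) + "." + std::to_string(ip[1]) + "." +
           std::to_string(ip[2]) + "." + std::to_string(ip[3]);
}
```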
Some samples also bind the listener port. Whether I do that or not makes no difference: it still works on Windows, but not on IoT. Can someone explain that to me? I assumed I need the listener port.
What could be the reason the device does not answer in the IoT case? Are there settings I have to provide on the socket?

CWnd as ActiveX control without .dll or .ocx file in C++?

Dear MFC/ActiveX/COM cracks, I have 'inherited' the source of an old MFC application (originally created with Visual Studio 6) which builds and runs so far in VS 2010, but it embeds some ActiveX controls as source code, apparently generated by the Visual Studio wizard (.h and .cpp files, see below); however, they are not in a subproject of their own, so no .dll or .ocx file is generated.
Here is the relevant part of the header file of one such control:
#if !defined(AFX_CHARTFX_H__F8A080E0_0647_11D4_92B0_0000E886CDCC__INCLUDED_)
#define AFX_CHARTFX_H__F8A080E0_0647_11D4_92B0_0000E886CDCC__INCLUDED_

#if _MSC_VER >= 1000
#pragma once
#endif // _MSC_VER >= 1000

// Machine generated IDispatch wrapper class(es) created by Microsoft Visual C++
// NOTE: Do not modify the contents of this file. If this class is regenerated by
// Microsoft Visual C++, your modifications will be overwritten.

/////////////////////////////////////////////////////////////////////////////
// CChartfx wrapper class

class CChartfx : public CWnd
{
protected:
    DECLARE_DYNCREATE(CChartfx)

public:
    CLSID const& GetClsid()
    {
        static CLSID const clsid
            = { 0x8996b0a1, 0xd7be, 0x101b, { 0x86, 0x50, 0x0, 0xaa, 0x0, 0x3a, 0x55, 0x93 } };
        return clsid;
    }

    virtual BOOL Create(LPCTSTR lpszClassName,
        LPCTSTR lpszWindowName, DWORD dwStyle,
        const RECT& rect,
        CWnd* pParentWnd, UINT nID,
        CCreateContext* pContext = NULL)
    { return CreateControl(GetClsid(), lpszWindowName, dwStyle, rect, pParentWnd, nID); }

    BOOL Create(LPCTSTR lpszWindowName, DWORD dwStyle,
        const RECT& rect, CWnd* pParentWnd, UINT nID,
        CFile* pPersist = NULL, BOOL bStorage = FALSE,
        BSTR bstrLicKey = NULL)
    { return CreateControl(GetClsid(), lpszWindowName, dwStyle, rect, pParentWnd, nID,
        pPersist, bStorage, bstrLicKey); }

    // rest of the header file omitted
Note that this class inherits from CWnd and not from some OCX class. But since all MFC windows are COM components (as I read somewhere) and this is generated code, it should have worked at some point. I also read that this may really be a migration gap that occurred somewhere before 2005.
Also note the DECLARE_DYNCREATE, so I think this is late binding using the IDispatch interface, and MFC will call a Create() function for us.
The above control is used via aggregation by an encompassing CDialog (also created with VS wizard):
//... analysedlg.h - leading auto-generated stuff omitted
class CAnalyseDlg : public CDialog
{
CChartfx m_chhrtfx;
//... enum for resource ID, DoDataExchange, message map, other members…
}
The dialog, in turn, is embedded in a view class of the application (again, via a member variable) and created by invoking DoModal() in a menu item event handler.
So when I click the corresponding menu item, I get an m_hWnd NULL assertion, and when hitting 'Retry' in the popped-up dialogue, I get the following stack (excerpt):
mfc100d.dll!COleControlContainer::FillListSitesOrWnds(_AFX_OCC_DIALOG_INFO * pOccDlgInfo) line 925 + 0x23 Bytes C++
mfc100d.dll!COccManager::CreateDlgControls(CWnd * pWndParent, const char * lpszResourceName, _AFX_OCC_DIALOG_INFO * pOccDlgInfo) line 410 C++
mfc100d.dll!CDialog::HandleInitDialog(unsigned int __formal, unsigned int __formal) line 715 + 0x22 Bytes C++
mfc100d.dll!CWnd::OnWndMsg(unsigned int message, unsigned int wParam, long lParam, long * pResult) line 2383 + 0x11 Bytes C++
mfc100d.dll!CWnd::WindowProc(unsigned int message, unsigned int wParam, long lParam) line 2087 + 0x20 Bytes C++
mfc100d.dll!AfxCallWndProc(CWnd * pWnd, HWND__ * hWnd, unsigned int nMsg, unsigned int wParam, long lParam) line 257 + 0x1c Bytes C++
mfc100d.dll!AfxWndProc(HWND__ * hWnd, unsigned int nMsg, unsigned int wParam, long lParam) line 420 C++
mfc100d.dll!AfxWndProcBase(HWND__ * hWnd, unsigned int nMsg, unsigned int wParam, long lParam) line 420 + 0x15 Bytes C++
user32.dll!766162fa()
[missing frames omitted by me]
mfc100d.dll!CWnd::CreateDlgIndirect(const DLGTEMPLATE * lpDialogTemplate, CWnd * pParentWnd, HINSTANCE__ * hInst) line 366 + 0x2a Bytes C++
mfc100d.dll!CDialog::DoModal() line 630 + 0x20 Bytes C++
In the VS debug output there are the following lines:
CoCreateInstance of OLE control {8996B0A1-D7BE-101B-8650-00AA003A5593} failed.
>>> Result code: 0x80040154
>>> Is the control is properly registered?
Warning: Resource items and Win32 Z-order lists are out of sync. Tab order may be not defined well.
So apparently the call to CoCreateInstance had already been made and silently failed, without an assertion, which would have been nice to have. Does anybody know where this happens?
My central question is whether it is correct that, in this case, MFC would normally take care of registering the control even though it is not in a .dll or .ocx project, and that it must have worked like this in the past. I read somewhere that CreateDlgIndirect with a dialog template is a way of creating ActiveX controls without needing a .dll or .ocx file. In the stack above it is called too, but for the dialogue rather than for the ActiveX control.
Does anyone know more about this issue and how to fix it?
If I do have to register the controls manually, e.g. using regsvr32.exe or from source code, is there a way without .dll or .ocx files? Or do I have to repackage the ActiveX components into their own projects (which would be more component-based/modular anyway)?
I hope my problem description is precise enough; I would be very thankful for any answer.
Kind regards.
I've just run into this when using an old ActiveX control. Apparently it was apartment-threaded, and I was trying to call CoInitializeEx() with COINIT_MULTITHREADED.