So if you hit up the ‘goog or bing’d it, you’ll find that everyone who asks about this error:
The Active Directory Domain Controllers required to find the selected objects in the following domains are not available:
Ensure the AD Domain Controllers are available and try to select the object again
…gets told that it’s their DNS. But if you’re here, I assume you’ve checked and you don’t have a DNS issue. It could actually be your security settings. This error happens when you try to add a user from the trusted domain to a security group in the trusting domain.
Most of the time this happens to people with a one-way trust (for instance, when setting up an ESAE, Tier 0 forest, or “red forest”). And sure… DNS could cause this, but I have seen too many forums where people walked away without an answer. In fact, the moderators of those forums will see the question, say it is DNS, mark their own answer as final without confirming with the person who asked, and close the question. It is frustrating.
Here’s the other thing that will cause this. The GPO setting:
Computer Configuration > Administrative Templates > System > Remote Procedure Call > “Enable RPC Endpoint Mapper Client Authentication”
When set to Enabled (which is a STIG finding that will soon be removed for 2008 R2 and 2012+), it tells the computer account that any RPC communication must be mutually authenticated. Which… is sort of a problem when you only have a one-way trust. You can’t just set this back to “Not configured” either, as the setting is tattooed. You have to disable it. Then, you have to reboot.
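If you want to confirm what the policy actually did on the box, you can check the registry directly. This is a sketch assuming the standard RPC policy location for this setting (`EnableAuthEpResolution` under the `Windows NT\Rpc` policy key); verify the key on your own systems before relying on it.

```shell
:: Check the tattooed value. With the GPO set to Enabled it reads 0x1;
:: after switching the GPO to Disabled and refreshing policy it should read 0x0.
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Rpc" /v EnableAuthEpResolution

:: Pull the updated policy, then reboot for the change to take effect.
gpupdate /force
shutdown /r /t 0
```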
As a bit of trivia, if you had created the one-way trust as an external trust, this wouldn’t have happened. Instead, something much more bizarre would (I plan to write a blog about this as well). You can create the trust, add users, and even log on with them. But when logged on with an account from the trusting forest, you will have a frozen start menu. Administrators can’t run things as administrator. PsGetSid will tell you that you have a broken trust when clearly you don’t.
Strange stuff. Two issues with the same root cause. Hope this helps someone out there.
In episode 10 we’ll talk about network congestion. While the problem isn’t normally on your server, I’ll show you how to determine whether it is, and what caused it. This episode also covers how to spot and identify network latency. We cover perfmon and how to interpret the counters, pathping, TCPView, Iometer as a network tool, and a handful of other tips and tricks.
In episode 9 we’ll load a machine up to nearly 100% processor utilization and talk about how to track it in perfmon (because I know you already know how to do this with task manager). We also cover context switching, DPCs, interrupts, and the Processor Queue Length. After this we cover the process counters and thresholds for tracking down whodunit.
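If you want a quick look at those counters without opening the perfmon UI, the built-in typeperf tool can sample them from a command prompt. The counter paths below are the standard System and Processor objects; a sketch only, and instance names may differ on your hardware.

```shell
:: Sample total CPU, context switches, and the processor queue
:: every 2 seconds for 10 samples.
typeperf "\Processor(_Total)\% Processor Time" ^
         "\System\Context Switches/sec" ^
         "\System\Processor Queue Length" ^
         -si 2 -sc 10
```

A sustained Processor Queue Length well above the number of logical processors, alongside high % Processor Time, is the classic pattern the episode digs into.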
In episode 8 we’ll cover how to identify and troubleshoot memory leaks in the kernel virtual address space. I’ll demo a system crash/lockup and then show what it looks like in perfmon, then expand on how to identify the offending pooltag in the kernel using poolmon.exe. We specifically focus on the nonpaged pool, the paged pool, and system PTEs, as well as the pertinent counters that correlate with thresholds of each “bucket”.
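As a teaser for the poolmon workflow, here’s the general shape of mapping a leaking pooltag back to a driver. The tag “Ddk” below is purely illustrative, not from a real leak; the findstr-through-the-drivers-directory trick is a long-standing Microsoft-documented technique, but treat this as a sketch.

```shell
:: Launch poolmon sorted by max byte usage (you can also press "b"
:: inside the running UI to re-sort).
poolmon.exe /b

:: Once you have a suspect tag, search the drivers directory for
:: binaries that contain it, to find the likely owner:
cd /d %SystemRoot%\System32\drivers
findstr /m /l "Ddk" *.sys
```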
Episode 7, Windows Memory Architecture. Performance Series.
In episode 7 we’ll start talking about “Fake Memory” as I’ve been alluding to for several episodes. We cover the microkernel memory architecture of 32- and 64-bit operating systems, the Virtual Memory Manager, and why user mode processes sometimes crash when they exceed it. We cover this in a whiteboarding session and then move into a demo of an actual application crash, and show what a typical crash looks like in Perfmon.
In episode 6 we’ll finish off the RAM and Pagefile discussion, then move on to leaks of just RAM. We cover the differences between the working set and private bytes counters in the process object. Then we crash our test box again by setting a process loose to suck up everything in the Available MBytes counter, and investigate what that looks like both live and in a capture.
Episode 5, Tracking RAM and Page File Exhaustion. Performance Series.
In episode 5 we start the discussion of memory leaks. This is a topic that will span several episodes. Memory can mean the virtual address space of a process or the kernel itself, or it could mean you ran out of RAM. Or, you ran out of RAM and Page File. How do you tell which one? How do you figure out which process stole all your precious memory? Specifically in this episode we’ll be talking about what it looks like when you run out of both RAM and Page File (the most common type of process memory leak). We also cover the difference between the committed bytes of a process versus private bytes. Finally, we cover concepts like the Virtual Memory Manager of Windows, trimming of the working set, and what the page file really is. The next episode will continue with a discussion more related to the working set.
In episode 4 we cover the first of the actual counter sets used to identify problems, and the methodology to do so. Identifying a disk bottleneck involves more than just latency counters; it also includes ruling out whether your server or PC was actually the culprit. So what we will do in this episode is load up a computer with a lot of disk activity, then additionally load up the hypervisor that runs that computer and compare the differences. Finally, we will do some process analysis and determine who the bad guy was.
Episode 3, Loading and Interpreting Counters. Performance Series.
In episode 3 we finish up with the intro to perfmon, such as how to actually load the data collector sets we captured. We then cover some of the basics of interpreting the counters (scaling graphs, scaling counters, looking for patterns, and zooming into the problem).
In episode 2 we continue working with some of the more advanced features of Perfmon, such as setting up data collector sets, managing Perfmon through the command line with logman, and integration with task scheduler.
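To give a flavor of the logman workflow covered in the episode: the commands below create, start, and stop a counter data collector set from the command line. The collector name, counters, and output path are made up for illustration; adjust them to your own baseline.

```shell
:: "CPU_Mem_Baseline" is an illustrative collector name.
:: Create a counter collector sampling every 15 seconds to C:\PerfLogs:
logman create counter CPU_Mem_Baseline ^
    -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" ^
    -si 15 -o C:\PerfLogs\CPU_Mem_Baseline

logman start CPU_Mem_Baseline
:: ...reproduce the issue...
logman stop CPU_Mem_Baseline
```

From there, pairing logman with task scheduler (as the episode shows) lets you start and stop collections on a schedule instead of by hand.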