Comprehensive analysis of the 50k Sealed Indictment claim.

Updated: Jul 30, 2020



This is my analysis of the 50k indictment claim.


 

JULY 2020 UPDATE:


Since I wrote this article, more information has become available about the sealed cases.


Thanks to the efforts of Indictmentanon (Twitter: @lonegreyhat), who is the source for the Qmap.com sealed cases webpage, we are now able to separate the sealed cases by category, which gives a much more accurate idea about how many are indictments.


Using this new information, Indictmentanon estimates that only ~5k of the currently reported 180k sealed cases are indictments.



The Qanon research team has also released a new chart, in which they have removed the word 'indictments', as well as the claim that the amount is unusual:



This new information renders much of this article unnecessary, but it remains an accurate description of the nuts and bolts of the data and collection methods, for anyone who would like to read about it.


To sum up the situation: Over the past 3 years, there have been ~180k sealed cases filed, of which ~5000 are indictments. There is no proof that this amount of sealed cases/indictments is unusual.


 

Brief summary:

This is the chart that claims there are 50k sealed indictments.



At the bottom of the chart, it states that a normal yearly amount for all 94 districts COMBINED is 1077.

This is a PACER search from a SINGLE district over only 5 months in 2015. It returned 1509 sealed proceedings.




This shows that the 1077 claim is not accurate. In 2015, a SINGLE district returned more than that in only 5 months.

This is a comparison of the entire state of Colorado from Oct.-Feb. of this year, to the same time period last year.



As you can see, last year had 1199 sealed proceedings, and this year had 1065. There were MORE sealed proceedings in this state last year, before the alleged secret investigation started.

This evidence contradicts the claim that we are seeing astronomical levels of sealed proceedings this year.

Below, I explain the full story in much more detail, and with additional data.


 

Full Analysis:


First we need to clarify some terminology.

A sealed document means that there is NO information available about it, INCLUDING whether or not it is an indictment. What the numbers on the chart represent are sealed 'PROCEEDINGS', which include search warrants, juvenile offenses, criminal complaints, indictments, magistrate judge proceedings, and many other categories.

This is even acknowledged at the bottom of the 50k chart, by this statement:


Labeling the chart "Sealed Indictments" is extremely misleading.

There are not 50k sealed indictments. There are 50k sealed 'proceedings', some of which 'may' contain indictments.


Now that we have clarified that we are discussing sealed 'proceedings', we need to determine if the reported amounts are unusual.

First let's examine their claim that 1077 is a normal amount:

Terminology note - Criminal cases have three major case types: 'criminal', 'miscellaneous', and 'magistrate judge'.

At the bottom of the 50k chart we find this statement:


This number comes from pg. 17 of the 2009 FJC report, which analyzed data from 2006. It represents the number of sealed criminal cases with the case type 'criminal'. This is the category that contains indictments, and as you can see on the same page, there were 284 sealed indictments within this group in 2006.



http://www.uscourts.gov/sites/default/files/sealed-cases.pdf

However, there is a problem using 1077 as a comparison. When you examine the search methods that the current research team is using, they do not limit their search to only criminal cases with the case type 'criminal'. They inexplicably include ALL criminal case types, including miscellaneous, magistrate judge, and many others.

This is a link to the exact search settings they use, which were provided by the research team on their google drive. You can see that in step 6, they instruct users to leave all settings at default.




These are all the different criminal case types that a default search includes.



This means that they are comparing two DIFFERENT sets of data: case type 'criminal' 2006 vs ALL case types 2018.


The research team is comparing a SUBSET from 2006 (1077) to the ENTIRE DATA SET of 2018 (51k). This is a blatantly false comparison.


This is an apples-oranges comparison. To make an apples-apples comparison we have two options:


Gather all the Sealed Criminal Cases (SCC) and either


A: compare case type 'criminal' 2006 vs case type 'criminal' 2018.


OR


B: compare ALL case types 2006 vs ALL case types 2018.


Since the 2018 research team is searching for ALL case types, let's use that, for an apples-apples comparison:


SCC ALL case types 2006 vs SCC ALL case types 2018.


Returning to the FJC report, we find that while it didn't list ALL case types that year, it did list the largest ones. It listed the number of sealed magistrate judge cases (15,177 pg. 21), and sealed miscellaneous cases (8,121 pg. 23). Adding these figures to 1077 brings the total number of sealed criminal cases in 2006 to 24,375.
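The 2006 total can be re-derived directly from the report's category figures; a minimal sketch in Python (the page numbers in the comments are from the FJC report cited above):

```python
# Sealed criminal cases in 2006, by case type, from the 2009 FJC report.
fjc_2006 = {
    "criminal": 1_077,          # pg. 17 -- the category containing the 284 sealed indictments
    "magistrate_judge": 15_177, # pg. 21
    "miscellaneous": 8_121,     # pg. 23
}

# Total sealed criminal cases across the listed case types.
total_sealed_2006 = sum(fjc_2006.values())
print(total_sealed_2006)  # 24375
```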


 

Labeling the chart 'Sealed Indictments', and using 1077 as a comparison, implies that the team either doesn't fully understand the data, or that they are trying to deceive people.


I asked members of the team for an explanation as to why they were using 1077 instead of 25k, and also why they are including ALL case types in their results, instead of just the case type where indictments would be. I received no response on these specific questions.


 

Regardless, now we are down to the 'real' comparison of the data. Total sealed criminal cases 2006 vs 2018. 25k vs 51k.

The problem now is that the 25k number is based on 12-year-old data, and there are MANY differences between the methodologies used in obtaining each set of numbers.


The most problematic is that the FJC report ONLY examined cases that had been sealed for at least 2 years.



This essentially nullifies the FJC report as a comparison.


We have NO way of knowing how many sealed proceedings were really filed that year... we only know how many remained sealed after 2+ years (25k). The amount filed could have been 75k for all we know.


 

The truth is that the study isn't even necessary... we can run recent history in PACER, using the EXACT methods that the research team is using.

In fact, one of the first things I did was ask one of the team members why they were using numbers from 2006, when they can run recent history in PACER. Their answer was that recent PACER history would be a BETTER comparison, but they didn't have the time or resources to devote to it.



So right away, we have direct acknowledgment from the founder of the research team, that they are using outdated data to compare their numbers to.


I decided to search PACER history myself, and compare to the numbers they have reported. If we are experiencing astronomical levels of sealed proceedings, as they claim, it should quickly become evident.



Colorado:

This is the entire state of Colorado, observed from Oct. of 2017, when the supposed secret investigation started, to July 2018, and that same time period for the two previous years:





As you can see, not only are the current levels not dramatically higher, the previous year had MORE sealed proceedings than we see this year.



California:

The most active district out of all 94, is California - Central. This is a comparison of that district for the month of January, between 2017 (before the alleged investigation), and 2018. The amounts are basically the same.


(These are the .pdf files from PACER. The 2018 file is from the research team's google drive.)





D.C.:


D.C. is discussed often, due to its location. For this comparison, we find over twice as many sealed proceedings were filed last year. Even in 2016, while Obama was president, there were almost twice as many as this year.





These are manual counts of sealed proceedings in these districts, for the time periods listed:

Connecticut:

4/1/2018 - 4/30/2018: 110

4/1/2017 - 4/30/2017: 230

Iowa, Northern:

10/30/2017 - 2/28/2018: 95

10/30/2016 - 2/28/2017: 89

10/30/2015 - 2/28/2016: 69

Alaska:

10/30/2017 - 2/28/2018: 135

10/30/2016 - 2/28/2017: 93

10/30/2015 - 2/28/2016: 107

Colorado:

6/1/2018 - 6/30/2018: 93

6/1/2017 - 6/30/2017: 127

6/1/2016 - 6/30/2016: 130

Again, we can see that there is absolutely NO evidence that dramatically higher numbers of sealed proceedings are being filed this year... in fact, some districts had MORE sealed proceedings in previous years.

As of the writing of this post, the largest PACER analysis that I'm aware of is this 15 district analysis that examined cases for the past 9 years. In it, we observe a steady increase in sealed proceedings over time.

Note: This analysis was linked by Praying Medic, who, for those who don't know, is a very respected member of the Q movement. I have linked both Praying Medic's tweet, and the blog where the analysis came from, at the bottom of this article. In this spreadsheet, some of the percentage calculations are labeled incorrectly, but the case totals are accurate. Specifically, the 'percent change' cells are really 'percent of'. Subtract 100 (i.e. drop the leading '1'), and they become 'percent change'.


You can see from these numbers, that even with only 15 of the 94 districts analyzed, the number of sealed proceedings was already hitting totals over 8,000 for the past 5 years. By 2016 it was up to 10,000... and again, this is for only 15 of the 94 districts.

Using this spreadsheet, if we examine the time period when the secret investigation supposedly started (Oct. 2017), to the ending date of the analysis (Feb. 2018), we see a 26% increase compared to that same time period last year.


However, if we examine the year-to-year percent change of this time period going back to 2009, we see that between Oct.'09-Feb.'10 and Oct.'10-Feb.'11, it was 24%. This means that the increase in activity this year is only 2 percentage points larger than an earlier recorded increase... which means it is essentially a normal increase.
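The 'percent of' vs. 'percent change' correction described in the spreadsheet note above is simple arithmetic; a minimal sketch (the 1000-to-1260 counts are invented, purely for illustration):

```python
def percent_change(old, new):
    """Year-over-year percent change between two case counts."""
    return (new - old) * 100 / old

def percent_of_to_change(percent_of):
    """Convert a 'percent of' figure (e.g. 126) to a 'percent change' (26)."""
    return percent_of - 100

# Illustrative: a district going from 1000 to 1260 sealed proceedings
# year over year is a 26% increase...
yoy = percent_change(1000, 1260)       # 26.0
# ...and a spreadsheet cell showing '126%' as 'percent of' means the same thing.
corrected = percent_of_to_change(126)  # 26
```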

We should also note that cases often become unsealed over time, for a number of reasons. This means that we would expect the most recent case counts to drop, effectively making the difference between this year and last year even less.

Bottom line:

Disregarding the issues with the research team using misleading language/numbers, we have established that the 2009 study is not a good comparison due to the age of the data, and the methods used to collect it. Even the research team has acknowledged this.


While a full 94 district analysis from PACER has not been performed, the partial analyses we have, ALL suggest that there is not an unusual amount of sealed proceedings being filed this year.

This is how I would summarize it:

Until a full Oct. 2016 - Oct. 2017 PACER analysis is performed, we cannot definitively state that the current numbers of sealed proceedings 'ARE' or 'ARE NOT' dramatically higher than normal -- HOWEVER -- the evidence we have overwhelmingly shows that they are NOT dramatically higher.


 

If you are wondering why a full 94 district analysis hasn't been done, it's because of the cost. I would estimate it would take around $400 to pull it. I've already spent around $150 pulling reports off PACER, and that is all I'm putting into it.


Regardless, the data we do have includes some of the most active districts, including all districts in the state of California, as well as D.C. These districts are the ones many people bring up in discussions, due to their high population and prominent locations.


The data PROVES that there are not dramatically higher numbers of sealed proceedings being filed this year, in California or D.C.


Even if someone wanted to be a stickler about technically not having all 94 districts, the fact that we can prove that California and D.C. are not experiencing unusual amounts of sealed proceedings, should be a HUGE red flag that something is wrong with their claim.


Additionally:


If you look at early tweets from the team, you will see that they were not using a full 94 district analysis to make the claim that the numbers were unusual.


In this tweet, one of the founders of the chart says that 24 districts have produced an 'unprecedented' amount of sealed indictments.



When someone asked if anyone had run all 94 districts, he responded with this:




So if a partial analysis was sufficient then, it's sufficient now. Apples-apples comparisons to recent history show that no unusual activity is taking place in the court system.


Keep in mind that the only reason he thought the numbers were unprecedented, was because at the time, he didn't understand the data. He was looking at sealed PROCEEDINGS, and incorrectly thought they were sealed indictments.


 

Estimating the 2018 amount:


I've seen some Qanon members use the 2009 FJC report to estimate the current number of indictments, and I want to briefly address this.


Because of the confusion with case types, they use an incorrect percentage value of 26% in their estimate calculations, and they conclude that the estimated total would be ~13k sealed indictments. This is incorrect.


The correct percentage value they should be using is 1.2%, and this is the math:


The 2009 FJC report listed 24,375 sealed criminal cases, with 284 sealed indictments... that gives us a percentage of 1.2%.


If we apply 1.2% to the current 51,181 total, we get an estimate of 614 sealed indictments.
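The estimate above is easy to reproduce; a short check of the arithmetic:

```python
# Figures from the 2009 FJC report, as discussed above.
sealed_cases_2006 = 24_375
sealed_indictments_2006 = 284

# Indictments as a share of all sealed criminal cases: ~1.2%.
rate_percent = round(sealed_indictments_2006 / sealed_cases_2006 * 100, 1)

# Applying the rounded 1.2% rate to the research team's 2018 total.
sealed_cases_2018 = 51_181
estimate_2018 = round(sealed_cases_2018 * 0.012)
print(rate_percent, estimate_2018)  # 1.2 614
```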


 

Below I discuss the research team, the reasoning behind their data collection methods, the flaws in their reasoning, and their rebuttal.

The Research team:

I generally don't like to discuss individuals, but I think it is somewhat necessary here. I've heard people describe the research team as 'lawyers and paralegals', and that perception potentially gives more credibility to their results. I certainly perceived them as experts when I started my research.


Initially I had quite a few examples that I was going to discuss here, but I have decided to include only two. The purpose of this is just to make clear that neither the research team nor I are legal experts.






Their rebuttal:

Even though one of the team members previously admitted that recent PACER history would be a BETTER comparison than data from 2006, the spokesperson of the group refused to accept PACER data from 2016.

This is that person's rebuttal, but first I need to explain the reasoning behind the team's analysis.

A case can be filed 'sealed' or 'unsealed'. Over time, that status may change, depending on the situation. For example, a case that is filed 'unsealed' may eventually become 'sealed' due to certain conditions. This would be referred to as a retroactively sealed case.

As I explained earlier, it is not possible to know if a 'sealed' document is an indictment. The only thing we can state with certainty, is that any indictment related to the secret investigation, would be 'sealed' AT THE TIME OF FILING.

Therefore, the ENTIRE PURPOSE of their data collection is to capture all cases that were 'sealed' at the time of filing, and extrapolate that any indictments related to the secret investigation would be included in that group.

(Note: There is a critical flaw to their methodology, that I will discuss later.)

When I asked the spokesperson why they would not accept PACER results from 2016, their answer was that it's because PACER searches from 2016 and earlier would include retroactively sealed cases. This could cause the number of sealed cases in previous years to be inflated, and skew the numbers.

The problem with this is that the 2009 report they are using ALSO includes retroactively sealed cases. The report clearly states that all of the cases they studied were 2-3 years old. This means that any case that shows up as 'sealed' in that report, may have initially been filed 'unsealed'.

Both the 2009 report, 'AND' PACER searches from 2016, include retroactively sealed cases. With this being the case, there is ZERO reason not to accept recent PACER data.


When I pointed this out to the spokesperson, and again asked for a valid reason as to why they would not accept data from 2016, I got no response.

Cases that become unsealed:

Just like cases can become sealed over time, they can also become unsealed. The current research team does NOT remove these cases from their chart, even though many of the counted cases do become unsealed.

Coincidentally, the team that produced the 2009 FJC report stated that they restricted their study to 2-3 year old cases SPECIFICALLY because they did not want to include cases that were only sealed for a short amount of time, as the current research team is doing (pg. 1). This is yet another reason why that study is a bad comparison.

I decided to see how much of a factor this is, by re-examining a district they had analyzed 8 months prior, and determining how many 'sealed' cases had become 'unsealed'. In the monthly report of 1 district, I found that 43 cases had become 'unsealed' after 8 months.



By not removing cases that become unsealed in the first year, the difference between previous years is artificially inflated. They are comparing data in which cases that become unsealed ARE in the count, to data in which cases that became unsealed are NOT in the count.


Critically flawed methodology:

Due to the issue of cases becoming retroactively sealed or unsealed, the only way to know if a case was 'sealed' at the time of filing, is to monitor the cases in real time as they come in, or at the very least run searches each day.

Here, the research team is only running one search at the end of each month. When they do this, they only know the state of the cases ON THE DAY THEY RUN THE SEARCH. They have NO way of knowing if a case they count as 'sealed' was filed 'sealed' or 'unsealed'... they only know what state it is in on the day of their search.

For example, if a case was filed 'unsealed' on the 3rd, and became 'sealed' on the 19th, they would report it as a case that was 'sealed' at the time of filing, even though it wasn't.
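That example can be simulated directly; a toy sketch (the case and dates are the hypothetical ones from the example above, not real docket data):

```python
from datetime import date

# Hypothetical case: filed UNSEALED on the 3rd, then sealed on the 19th.
events = [
    (date(2018, 6, 3), "unsealed"),   # status at filing
    (date(2018, 6, 19), "sealed"),    # retroactively sealed mid-month
]

def status_on(events, day):
    """Status of the case as of a given day -- all a one-time snapshot can see."""
    current = None
    for when, new_status in events:
        if when <= day:
            current = new_status
    return current

status_at_filing = events[0][1]                  # 'unsealed'
snapshot = status_on(events, date(2018, 6, 30))  # month-end search sees 'sealed'
print(status_at_filing, snapshot)  # unsealed sealed -> misclassified as sealed-at-filing
```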

Here is a conversation I had with someone on the research team, where I asked them about this issue, and they admitted that their data is not immune to this variable.



This means that their investigation is effectively meaningless, as they have no way of knowing if any of the cases they examine were sealed at the time of filing... they only know that they were sealed on the day they ran the report.


Coincidentally, this was also the specific, and only, reason the spokesperson used for not accepting 2016 PACER results. Apparently that person didn't understand that their data is also vulnerable to this issue.

 

What would they need to do to 'properly' achieve what they are trying to accomplish? Monitor the incoming cases in real time, or at the very least perform daily searches of all 94 districts. As far as I know, this is the only way to truly determine whether a case was 'sealed' or 'unsealed' at the time of filing.

However, even if they did that, they would have nothing to compare it to, because no one has ever performed daily searches on any previous year. The only thing they could do is take this years results, and start comparing them to future years.

 

Quality control:

Finally, I wanted to address the fact that no one is verifying the research team's data collection, and there are errors throughout. To illustrate some of these examples, I am using the public excel spreadsheet of the research team's data, that I will link at the bottom of this article.

Here are some of the most noticeable mistakes:

1. This is the report for California - Central, for the month of December 2017, from their google drive. To determine the amount of sealed cases, you can download this .pdf and search for "*sealed*" (no quotes), or you can manually count them.


The total is 430; however, they miscounted and reported 815 on their chart.
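The counting step itself is easy to reproduce once the .pdf's text is extracted; a sketch (the case numbers and lines below are invented sample text, not taken from the actual report):

```python
# Invented sample of extracted report text; a real check would run this
# over the text of the downloaded PACER .pdf.
report_text = """
1:17-cr-00123  *SEALED* v. *SEALED*
1:17-mj-00456  USA v. Smith
1:17-cr-00789  *SEALED* v. *SEALED*
"""

# Count lines containing the literal '*sealed*' token (case-insensitive).
# Counting lines rather than token occurrences avoids double-counting a
# single entry like '*SEALED* v. *SEALED*'.
sealed_lines = [ln for ln in report_text.splitlines()
                if "*sealed*" in ln.lower()]
print(len(sealed_lines))  # 2
```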



2. Observe the reported monthly sealed case totals for California - Central from the excel spreadsheet. They are normally 400-500, but the month of February inexplicably only has 12.


This amount stands out like a sore thumb, yet no one on the research team noticed it.

It happened because for this month they incorrectly selected to only retrieve criminal cases with the case type 'criminal', which returns far fewer results than their normal method. Read the last page of this document, and you will see that they selected 'cr' under case type, as opposed to their normal method of selecting 'ALL' case types.




Coincidentally, this is the search setting they SHOULD have used to properly compare against the 1077 number, as case type 'criminal' is the category where sealed indictments would be found.


3. The 50k chart for the month of August was constructed incorrectly. The month of May was doubled into the June column, which threw the rest of the columns off. This has since been corrected, but the incorrect copy circulated widely, and many people commented on the oddness of the same numbers being reported for May and June. That incorrect chart is still being shared and discussed online.



4. Illinois - Central, Mar. 2018. The report lists 45, yet they counted 182, and included the incorrect count in their chart numbers.





Bottom line:


This is sloppy, non-scientific analysis: its methods are flawed, and its results are being compared to 12-year-old data that was obtained with different methods. The claim doesn't hold up to scrutiny.


Conclusion:


After all that, I want to state that based on my discussions with the various team members, I did not get a sense that they were intentionally being deceptive. I think they are well intentioned, and doing what they feel is the right thing to do. I see them doing a lot on twitter to raise awareness of underage trafficking/assault, and that is a good thing.


At the end of the day, however, facts are facts. Bad data does not help any particular cause, so I felt this needed to be addressed.


I welcome any input regarding mistakes with the data/reasoning I have presented. Leave comments here, or by email, and I will make updates if necessary.

Email: wmerthon@mail.com


Note: If you leave a comment below, I won't be able to reply directly to it. If I make a reply, it will be a new comment, and you will have to check back to see it.

If you think the information contained in this article is useful, then share it everywhere.


References:

1. Several people around the internet have posted research that got me started on this path, and whose data/reasoning I may have intentionally or unintentionally referenced. Most notable is Reddit user Raptor-Facts and the analysis she posted on Reddit. Her report is seen all over the web, and rightfully so, as it is a great write-up.


2. This is the Praying Medic tweet that I referenced above, where he links to the 15 district analysis. Note that he is quoting a result that was found to be a mistake. The 175% result that he quotes was revised to 26%.



3. This is the blog that contains the 15 district analysis. Note that he had several mistakes in his calculations when he wrote the article. He has since corrected the numbers in the article, but the text of the article largely remains as it was initially written... which was under the assumption of his previous calculations, which had incorrectly shown dramatically higher results. Also, in his updated paragraph at the bottom, he incorrectly calculated the estimated sealed indictments. The number should be 614, as I discussed above.



4. This is the research team's google drive, where they keep the monthly reports that are used to maintain the chart, along with some other information.



5. This is an article from the Daily Dot, which has been circulated widely on the internet. It does an excellent job of explaining the issues with the claim.


6. Lastly I want to give credit to the person who maintains the excel spreadsheet for the current research team, Twitter user @lonegreyhat. His presentation is straightforward and fair. For example, he uses 25k as a comparison, instead of 1077. While the 25k number still has flaws, it's far more accurate than the 1077 that the chart team uses.


He is also keeping track of sealed cases that become unsealed, but due to the automation of the data collection, and the poor numbering system that the courts use on sealed cases, those figures are not always accurate.

Regardless, @lonegreyhat is presenting the data in an unbiased, objective fashion. There is an enormous amount of information contained in his spreadsheet, and it is all sourced EXTREMELY well. I want to make it clear that even though I have concerns about the rest of the research team, I am not implying that lonegreyhat does not know what he is doing, or that he is doing a bad job.


