Merge pull request #6980 from geoffw0/unusedqhelp

C++: Remove old and unused qhelp files
Geoffrey White
2021-10-28 10:55:31 +01:00
committed by GitHub
11 changed files with 0 additions and 341 deletions


@@ -1,13 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>about General Class-Level Information</p>
<!--TOC-->
</overview>
</qhelp>


@@ -1,13 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>about Architecture</p>
<!--TOC-->
</overview>
</qhelp>


@@ -1,16 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>about best practices</p>
<!--TOC-->
</overview>
</qhelp>


@@ -1,42 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
This metric measures the number of lines of text that have been added, deleted
or modified in files below this location in the tree.
</p>
<p>
Code churn is known to be a good (if not the best) predictor of defects in a
code component (see e.g. [Nagappan] or [Khoshgoftaar]). The intuition is that
files, packages or projects that have experienced a disproportionately high
amount of churn for the amount of code involved may have been harder to write,
and are thus likely to contain more bugs.
</p>
</overview>
<recommendation>
<p>
It is a fact of life that some code is going to be changed more than the rest,
and little can be done to change this. However, bearing in mind code churn's
effectiveness as a defect predictor, code that has been repeatedly changed
should be subjected to rigorous testing and code review.
</p>
</recommendation>
<references>
<li>
N. Nagappan et al. <em>Change Bursts as Defect Predictors</em>. In Proceedings of the 21st IEEE International Symposium on Software Reliability Engineering, 2010.
</li>
<li>
T. M. Khoshgoftaar and R. M. Szabo. <em>Improving code churn predictions during the system test and maintenance phases</em>. In ICSM '94, 1994, pp. 58-67.
</li>
</references>
</qhelp>
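
The qhelp only ever described the metric; the query that computes it is not part of this diff. Purely as a hypothetical sketch of the idea (the helper name and approach are assumptions, not the removed query's code), per-file churn can be approximated from the output of git log --numstat:

import subprocess
from collections import defaultdict

def churn_by_file(repo_dir="."):
    # Sum lines added plus lines deleted per file, as reported by
    # `git log --numstat`; a rough proxy for the churn metric above.
    out = subprocess.run(
        ["git", "log", "--numstat", "--format="],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    churn = defaultdict(int)
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":
            continue  # skip blank separators and binary files ("-")
        added, deleted, path = parts
        churn[path] += int(added) + int(deleted)
    return churn

if __name__ == "__main__":
    # The highest-churn files are, per the references above, the
    # likeliest defect hotspots.
    for path, n in sorted(churn_by_file().items(), key=lambda kv: -kv[1])[:10]:
        print(f"{n:8d}  {path}")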


@@ -1,6 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<include src="HChurn.qhelp" />
</qhelp>


@@ -1,6 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<include src="HChurn.qhelp" />
</qhelp>


@@ -1,48 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
This metric measures the number of different authors for files below this
location in the tree, as determined from the version control history. (This
is a better version of the metric that counts the number of different
authors using Javadoc tags.)
</p>
<p>
Files that have been changed by a large number of different authors are
by definition the product of many minds. New authors working on a file
may be less familiar with the design and implementation of the code than
the original authors, which is a potential source of bugs. Furthermore,
code that has been worked on by many people, if not carefully maintained,
often ends up lacking conceptual integrity. For both of these reasons, any
code that has been worked on by an unusually high number of different people
merits careful inspection in code reviews.
</p>
</overview>
<recommendation>
<p>
There is clearly no way to reduce the number of authors that have worked
on a file - it is impossible to rewrite history. However, files highlighted
by this metric should be given special attention in a code review, and may
ultimately be good candidates for refactoring/rewriting by an individual,
experienced developer.
</p>
</recommendation>
<references>
<li>
F. P. Brooks Jr. <em>The Mythical Man-Month</em>, Chapter 4. Addison-Wesley, 1975.
</li>
</references>
</qhelp>
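
Again as an illustration only (a hypothetical helper, not the removed query), the author count the text describes can be read directly from the git history:

import subprocess

def distinct_authors(path, repo_dir="."):
    # Number of distinct author emails that have committed changes to
    # `path`, following renames; the version-control-based count the
    # metric describes, rather than one derived from Javadoc tags.
    out = subprocess.run(
        ["git", "log", "--follow", "--format=%ae", "--", path],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    return len(set(out.split()))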


@@ -1,30 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
This metric measures the total number of file-level changes made to files
below this location in the tree. For an individual file, it measures the
number of commits that have affected that file. For a directory of files, it
measures the sum of the file-level changes for each of the files in the
directory.
</p>
<p>
For example, suppose we have a directory containing two files, A and B. If the
number of file-level changes to A is <code>100</code>, and the number of
file-level changes to B is <code>80</code>, then the total number of
file-level changes to the directory is <code>180</code>. Note that this is
likely to be different (in some cases very different) from the number of
commits that affected any file in the directory, since more than one file can
be changed by a single commit. (For instance, if we performed
<code>80</code> commits on both A and B, followed by another <code>20</code>
commits on A alone, the total number of file-level changes would be
<code>180</code>, but the number of commits involved would only be
<code>100</code>.)
</p>
</overview>
</qhelp>
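
The arithmetic in that example can be checked mechanically. In this hypothetical sketch (not the removed query), each commit is modelled as the set of paths it touched, and the asserts reproduce the 180-versus-100 figures:

from collections import Counter

def file_level_changes(commits):
    # Per-file commit counts: each commit is the set of paths it changed.
    changes = Counter()
    for files in commits:
        changes.update(files)
    return changes

def directory_changes(changes, prefix):
    # The metric's value for a directory: the sum of the file-level
    # changes for the files under it.
    return sum(n for path, n in changes.items() if path.startswith(prefix))

# The worked example from the text: 80 commits touching both A and B,
# followed by another 20 commits touching A alone.
commits = [{"dir/A", "dir/B"}] * 80 + [{"dir/A"}] * 20
changes = file_level_changes(commits)
assert changes["dir/A"] == 100 and changes["dir/B"] == 80
assert directory_changes(changes, "dir/") == 180  # file-level changes
assert len(commits) == 100                        # commits involved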


@@ -1,51 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
This metric measures the average number of co-committed files for the files
below this location in the tree.
</p>
<p>
A co-committed file is one that is committed at the same time as a given file.
For instance, if you commit files A, B and C together, then B and C would be
the co-committed files of A for that commit. The value of the metric for an
individual file is the average number of such co-committed files over all
commits. The value of the metric for a directory is the aggregation of these
averages - for instance, if we are using <code>max</code> as our aggregation
function, the value would be the maximum of the average number of co-commits
over all files in the directory.
</p>
<p>
An unusually high value for this metric may indicate that the file in question
is too tightly coupled to other files, making it difficult to change in
isolation. Alternatively, it may simply indicate that you commit lots of
unrelated changes at the same time.
</p>
</overview>
<recommendation>
<p>
Examine the file in question to see what the problem is.
</p>
<ul>
<li>
If the file is too tightly coupled, it will have high values for its afferent
and/or efferent coupling metrics, and you should apply the advice given there.
</li>
<li>
If the file is not tightly coupled, but you find that you are committing lots
of unrelated changes at the same time, then you may want to revisit your commit
practices.
</li>
</ul>
</recommendation>
</qhelp>
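
To pin the definition down, here is a hypothetical sketch (not the removed query) of the per-file average and the max aggregation over a directory:

from collections import defaultdict

def avg_co_commits(commits):
    # In a commit touching k files, each file has k - 1 co-committed
    # files; the metric averages this over all commits touching the file.
    totals = defaultdict(int)   # sum of co-commit counts per file
    touches = defaultdict(int)  # number of commits touching the file
    for files in commits:
        for f in files:
            totals[f] += len(files) - 1
            touches[f] += 1
    return {f: totals[f] / touches[f] for f in totals}

def directory_value(averages, prefix, aggregate=max):
    # Aggregate the per-file averages over a directory, e.g. with max.
    return aggregate(v for f, v in averages.items() if f.startswith(prefix))

# Committing A, B and C together makes B and C the co-commits of A.
avgs = avg_co_commits([{"d/A", "d/B", "d/C"}, {"d/A"}])
assert avgs["d/A"] == 1.0                  # (2 + 0) / 2 commits
assert directory_value(avgs, "d/") == 2.0  # B and C each average 2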


@@ -1,53 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
This metric measures the number of file re-commits that have occurred below
this location in the tree. A re-commit is a commit to a file that had last
been touched less than five days earlier.
</p>
<p>
In a system that is being developed using a controlled change process (where
changes are not committed until they are in some sense 'complete'), re-commits
can be (but are not always) an indication that an initial change was not
successful and had to be revisited within a short time period. The intuition
is that the original change may have been difficult to get right, and hence
the code in the file may be more defect-prone than usual. The concept is
somewhat similar to that of 'change bursts', as described in [Nagappan].
</p>
</overview>
<recommendation>
<p>
High numbers of re-commits can be addressed on two levels: preventative and
corrective.
</p>
<ul>
<li>
On the preventative side, a high number of re-commits may be an indication
that your code review process needs an overhaul.
</li>
<li>
On the corrective side, code that has experienced a high number of re-commits
should be rigorously code reviewed and tested.
</li>
</ul>
</recommendation>
<references>
<li>
N. Nagappan et al. <em>Change Bursts as Defect Predictors</em>. In Proceedings of the 21st IEEE International Symposium on Software Reliability Engineering, 2010.
</li>
</references>
</qhelp>
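
The five-day window is the crux of the definition. A hypothetical sketch (not the removed query's code), assuming each file's commit timestamps are available in sorted order:

from datetime import datetime, timedelta

RECOMMIT_WINDOW = timedelta(days=5)  # "touched less than five days ago"

def count_recommits(touches):
    # `touches` maps a path to the sorted datetimes of its commits; a
    # commit is a re-commit when the previous commit to the same file
    # landed less than five days earlier.
    recommits = 0
    for times in touches.values():
        for earlier, later in zip(times, times[1:]):
            if later - earlier < RECOMMIT_WINDOW:
                recommits += 1
    return recommits

touches = {"f.cpp": [datetime(2021, 1, 1), datetime(2021, 1, 3), datetime(2021, 2, 1)]}
assert count_recommits(touches) == 1  # only the January 3rd commit qualifies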


@@ -1,63 +0,0 @@
<!DOCTYPE qhelp PUBLIC
"-//Semmle//qhelp//EN"
"qhelp.dtd">
<qhelp>
<overview>
<p>
This metric measures the number of recent changes to files that have occurred
below this location in the tree. A recent change is taken to mean a change
that has occurred in the last <code>180</code> days.
</p>
<p>
Any code that has changed a great deal may be more prone to defects than
usual, but this is particularly true of code that has changed dramatically
in the recent past, because it has not yet had a chance to be properly
field-tested and have its bugs ironed out.
</p>
</overview>
<recommendation>
<p>
There is more than one reason why a file may have been changing a lot
recently:
</p>
<ul>
<li>
The file may be part of a new subsystem that is being written. New code is
always going to change a lot in a short period of time, but it is important
to ensure that it is properly code reviewed and unit tested before integrating
it into a working product.
</li>
<li>
The file may be undergoing heavy refactoring. Large refactorings are sometimes
essential, but they are also quite risky. You should write proper regression
tests before starting on a major refactoring, and check that they still pass
once you're done.
</li>
<li>
The same bit of code may be changing repeatedly because it is difficult to
get right. Aside from rigorous code review and testing, it may be a good
idea to rethink the system design - if something is that hard to get right
(and it's not an inherently difficult concept), you might be making life
unnecessarily hard for yourself and risking introducing insidious defects.
</li>
</ul>
</recommendation>
<references>
<li>
N. Nagappan et al. <em>Change Bursts as Defect Predictors</em>. In Proceedings of the 21st IEEE International Symposium on Software Reliability Engineering, 2010.
</li>
</references>
</qhelp>
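
Finally, the same caveat applies: the removed file held only this description, so the sketch below is hypothetical. The 180-day count falls out of git log --since almost directly:

import subprocess
from collections import Counter

def recent_changes(repo_dir=".", days=180):
    # File-level changes in the last `days` days: one count per
    # (commit, file) pair, matching the definition above.
    out = subprocess.run(
        ["git", "log", f"--since={days} days ago", "--name-only", "--format="],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

# recent_changes()["src/foo.cpp"] gives one file's count; summing
# over a path prefix gives the value for a directory tree.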