These one-line Unix-like shell commands help find the directories consuming the most hard drive space.
This command lists the top 10 largest directories in the specified path.
This is useful on HPC systems, where disk quota as seen by quota -s or similar may indicate it’s time to clean up some files.
du -hd1 /path/to/check 2>/dev/null | sort -rh | head
While disk size quota is often the main concern, there is often also a quota on the number of files (inodes) that can be owned by a user.
To find the directories with the most files, use this command:
find /path/to/check -type f -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head
The graphical terminal disk-usage program “ncdu” is a widely available Ncurses-based program showing the largest files and directories.
ncdu is more user-friendly than the plain-text method above and allows deleting files interactively.
When using ncdu on WSL, specify the desired drive (here, for C:):
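A sketch of invoking ncdu on WSL to scan the Windows C: drive; the /mnt/c mount point assumes the default WSL automount configuration:

```shell
# Scan the Windows C: drive from WSL (default automount path)
ncdu /mnt/c
```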
When using CMake’s add_custom_command and add_custom_target on Windows, omitting the CMAKE_EXECUTABLE_SUFFIX (typically .exe) from the command output name can lead to the custom target being rebuilt every time the project is built, even if the command output is up to date.
This happens because CMake cannot exactly match the output file of the custom command when the .exe suffix is missing on Windows, and therefore does not properly track the custom target’s dependencies.
As a result, CMake assumes the target is always out of date and rebuilds it.
To avoid this issue, ensure that the output of your custom command includes the CMAKE_EXECUTABLE_SUFFIX.
For example, if generating an executable with add_custom_command, specify the output file with the correct suffix:
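A minimal sketch, with hypothetical file and target names, showing the suffix included in the output path used by both OUTPUT and DEPENDS:

```cmake
# Hypothetical generator program "mytool" built by a custom command.
# Including the suffix makes the OUTPUT path literally match mytool.exe
# on Windows, so CMake can see the file exists and is up to date.
set(tool ${CMAKE_CURRENT_BINARY_DIR}/mytool${CMAKE_EXECUTABLE_SUFFIX})

add_custom_command(
  OUTPUT ${tool}
  COMMAND ${CMAKE_C_COMPILER} ${CMAKE_CURRENT_SOURCE_DIR}/mytool.c -o ${tool}
  DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/mytool.c
)

add_custom_target(build_mytool ALL DEPENDS ${tool})
```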
By including the CMAKE_EXECUTABLE_SUFFIX in both the OUTPUT and DEPENDS sections, CMake will correctly recognize the target as an executable and will only rebuild it when necessary, thus avoiding unnecessary rebuilds.
Check the desired website’s SSL certificate with a service like Qualys SSL Labs to see if the certificate is valid and properly configured.
If the certificate is valid but you still encounter SSL errors, it’s possible that the public WiFi network is interfering with the SSL connection.
Also try using command line web programs to see if there are any SSL errors or warnings in the output.
Examples:
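For example (the hostname here is illustrative), verbose command-line checks can reveal certificate problems:

```shell
# Show the TLS handshake and certificate verification in detail
curl -v https://www.example.com

# Inspect the certificate chain the network actually presents
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null
```

If the network intercepts TLS, openssl s_client typically shows an unexpected certificate issuer in the chain.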
An example of the curl output when even HTTP connections are interfered with:
curl -v http://neverssl.com
* Host neverssl.com:80 was resolved.
* IPv6: (none)
* IPv4: x.x.x.x
* Trying x.x.x.x:80...
* connect to x.x.x.x port 80 from 0.0.0.0 port 58197 failed: Timed out
* Failed to connect to neverssl.com port 80 after 21104 ms: Could not connect to server
* closing connection #0
curl: (28) Failed to connect to neverssl.com port 80 after 21104 ms: Could not connect to server
Since President Electronics introduced the patented Auto-Squelch feature in 1998-1999, auto-squelch (a.k.a. ASC or ASQ) for AM and FM has become widespread in CB radios.
The auto-squelch feature allows one to detect weaker signals than would be practical with a manual squelch when driving around or scanning channels because the background noise is automatically accounted for.
Auto-squelch is not implemented for SSB, which remains on manual threshold setting of AGC-based squelch.
Algorithms exist for voice-activated squelch for SSB, but they are generally not yet implemented in CB (or amateur) radios.
A key innovation was including RF signal strength along with the out-of-band baseband noise level commonly used in FM auto-squelch to determine the squelch threshold.
This allows the squelch threshold to be set lower when a strong signal is present, which allows weaker signals to be heard.
The auto-squelch algorithm is not perfect, however, and is often fooled into closing by overmodulated AM signals; the squelch may open and close rapidly, making the signal difficult to understand.
The movement by prominent CB radio technicians to avoid overmodulation is to be lauded, both for not fruitlessly disrupting adjacent-channel users and especially for preserving the functioning of auto-squelch.
The false closing of squelch, even for strong on-channel overmodulated signals, is due to the large amount of out-of-band distortion products generated by overmodulation, which the auto-squelch algorithm can misinterpret as an undesired strong signal, closing the squelch.
In practice, one might use auto-squelch for general driving around and scanning channels, but switch to manual squelch when trying to listen to a known channel with a strong signal that is overmodulated.
For reception of weak signals, switching to the manual squelch and leaving the squelch open is generally optimal.
These characteristics are why it’s so important when buying a CB radio to get one that also has the “NRC” noise reduction algorithms, which make listening far less fatiguing on any voice mode AM / FM / SSB.
It’s a shame that so many are blindly buying the “Cobra 29” style legacy radios when for the same price and often less money, they could get a CB radio with NRC and other features like scanning that make the CB radio experience much more enjoyable and productive.
AFCI protection comes in the form of circuit breakers, outlets, and dead-front outlets that protect downstream circuits from arc faults, which can lead to fires.
Today’s AFCI protection is typically Combination AFCI with GFCI, which detects series or parallel arc faults and provides ground fault protection.
AFCI are designed to detect and interrupt electrical arcs that can lead to fires, but they can sometimes be sensitive to the electromagnetic interference (EMI) generated by radio transmissions.
Amateur radio and CB radio operators have reported instances where their radio transmissions cause AFCI (Arc Fault Circuit Interrupter) circuit protection to trip.
Many early reported false trips were due to RF getting into the circuit and the AFCI device itself.
As noted in the linked articles, a first solution may be replacing the AFCI with a newer model that has improved false-trip reduction.
AFCI can also be sensitive to current variation implicit in single-sideband (SSB) transmissions, which can cause the circuit breaker to trip if the rapidly varying current is interpreted as a series arc fault.
We have observed that once the current draw on an AFCI circuit is above a certain threshold (say 5-10 amps), the AFCI becomes sensitive to the rapidly varying current draw of an SSB transmitter and falsely trips, even if the transmitter is only drawing, say, 1-2 amps at 120 volts.
This can happen from having an electric heater or other high-current device on the same circuit as the radio, which can cause the AFCI to trip when the radio is transmitting at the same time the high-current device is drawing power.
This can be verified by using a battery power supply for the transmitter, to help ensure it’s not coupled RF causing the issue.
If the AFCI is new, the only solution may be to put the steady high-current loads and SSB transmitter on separate circuits.
AFCI protection should not be removed, as it has proven effective in preventing fires.
Docker images are useful for reproducibility and ease of setup and for software binary distribution on platforms not natively available on GitHub Actions runner images.
While one can set up a custom Docker image, it’s often possible to simply use an existing official image from Docker Hub.
This example GitHub Actions workflow uses the Ubuntu 20.04 image to build a C++ binary with the GNU C++ compiler.
For APT operations, the “-y” option is necessary.
Environment variable DEBIAN_FRONTEND is set to “noninteractive” to avoid interactive prompts for certain operations despite “-y”.
Don’t use “sudo” as the container user is root and the “sudo” package is not installed.
A special feature of this example is using Kitware’s CMake APT repo to install the latest version of CMake on an EOL Ubuntu distro.
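A sketch of such a workflow; the action versions, package names, and the Kitware repository line are assumptions to verify against the Kitware APT repo instructions:

```yaml
name: ci
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    env:
      DEBIAN_FRONTEND: noninteractive
    steps:
    - name: install compilers and Kitware CMake APT repo
      # no sudo: the container user is already root
      run: |
        apt-get update
        apt-get install -y --no-install-recommends ca-certificates gpg wget g++ make
        wget -qO - https://apt.kitware.com/keys/kitware-archive-latest.asc | gpg --dearmor -o /usr/share/keyrings/kitware-archive-keyring.gpg
        echo 'deb [signed-by=/usr/share/keyrings/kitware-archive-keyring.gpg] https://apt.kitware.com/ubuntu/ focal main' > /etc/apt/sources.list.d/kitware.list
        apt-get update
        apt-get install -y cmake
    - uses: actions/checkout@v4
    - run: cmake -B build
    - run: cmake --build build
```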
When CMake fails on the configure or generate steps in a CI workflow, having CMakeConfigureLog.yaml uploaded as an artifact can help debug the issue.
Add this step to the GitHub Actions workflow YAML file:
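A sketch of the upload step, assuming a top-level “build” binary directory and a matrix variable matrix.cc holding the C compiler name:

```yaml
- name: upload CMake configure log
  if: failure()
  uses: actions/upload-artifact@v4
  with:
    name: CMakeConfigureLog-${{ runner.os }}-${{ matrix.cc }}
    path: build/CMakeFiles/CMakeConfigureLog.yaml
    retention-days: 7
```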
The “retention-days” parameter is optional.
Ensure the “name” parameter is unique to avoid conflicts with other jobs in the workflow.
Here we assume that the OS and C compiler are unique between jobs.
Git signed commits help verify the Git author’s identity using PGP.
Optionally, a user or organization can set rules requiring Git PGP signed commits on Git hosting providers such as GitHub and GitLab.
PGP public keys can help verify author identity of Git commits, social media, website, etc.
Setup GPG on the laptop:
Export the PGP public and private keys and import them into GPG:
If one has Keybase, first export Keybase.io PGP key to GPG.
If one does NOT have Keybase, use gpg --full-generate-key to generate a GPG keypair.
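The Keybase export and import might look like the following; the flags are per keybase pgp export --help and should be verified on your system:

```shell
# Import the public key from Keybase into GPG
keybase pgp export | gpg --import

# Import the private key (Keybase prompts to set a passphrase)
keybase pgp export --secret | gpg --allow-secret-key-import --import
```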
Verify PGP key:
gpg --list-secret-keys --keyid-format LONG
The first lines will be like:
sec rsa4096/<public_hex>
The hexadecimal part after the / is a public reference to the GPG keypair.
Add the Git provider (such as GitHub or GitLab) verified email address to the PGP key.
To make commits “Verified” with the Git provider, at least one of the Git provider verified email addresses must match:
git config --get user.email
Use the GPG public ID below:
gpg --edit-key <public_hex>
In the interactive GPG session that launches, type
adduid
and enter the Name and the email address, which must exactly match the GitHub verified email address.
I also add the @users.noreply.github.com fake email that I always use to avoid spam.
Do adduid twice: once for the real GitHub verified email address, and again for the github_username@users.noreply.github.com fake email.
Add “trust” from the GPG> prompt:
trust
Since it’s you, perhaps a trust level of 5 is appropriate.
Type save to save changes, which may not show up until exiting and reentering the GPG> prompt.
On Windows, even though “gpg” works from Windows Terminal, it’s essential to tell Git the full path to GPG.exe, otherwise Git will fail to sign commits.
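For example (the install path is an assumption; check where Gpg4win or GnuPG placed gpg.exe on your system):

```shell
# Tell Git the full path to GPG.exe on Windows
git config --global gpg.program "C:/Program Files (x86)/GnuPG/bin/gpg.exe"
```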
Add the GPG public key to the Git provider.
Copy and paste the output from this command into the GPG Key field of GitHub or GitLab.
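The export command, assuming <public_hex> is the key ID found above:

```shell
# Print the ASCII-armored public key for pasting into the Git provider
gpg --armor --export <public_hex>
```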
This is done only once per human, not once per device.
If you get gpg: signing failed: No secret key or gpg: skipped "...": No secret key, the signing subkey may have expired.
GPG subkeys (encryption, signing) expire independently from the main key.
Check which subkeys are expired:
gpg --list-secret-keys
Look for subkeys marked expired. To extend them:
gpg --edit-key <public_hex>
key 1
expire
1y
save
The key N selects which subkey to extend (1 for first, 2 for second, etc.).
Then export the updated key to GitHub.
Certain networks may block traffic on port 22, which causes failure for Git operations like:
ssh: connect to host github.com port 22: Connection timed out
fatal: Could not read from remote repository.
The solution for this is to at least temporarily configure SSH to use port 443, which is typically open for HTTPS traffic.
This can be persistently done by editing the user SSH config file, usually located at ~/.ssh/config, here for GitHub, where ~/.ssh/github comes from GitHub SSH key setup.
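A sketch of the ~/.ssh/config stanza; ssh.github.com is GitHub’s documented SSH-over-HTTPS endpoint:

```
Host github.com
  Hostname ssh.github.com
  Port 443
  User git
  IdentityFile ~/.ssh/github
```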
An alternative is to specify the port directly in the Git remote URL like:
git push ssh://user@host:PORT/path/to/repo.git main
For even more extreme cases, such as where HTTP user-agent blocking is suspected, try the environment variable GIT_HTTP_USER_AGENT to mimic a web browser user agent string, for example:
export GIT_HTTP_USER_AGENT="Mozilla/5.0"
# try git commands again
macOS 26.3 upgraded the OpenSSH client to OpenSSH 10.2, which added a warning about SSH server key exchange with non-post-quantum algorithms.
The warning was added in OpenSSH 10.1, but macOS 26.3 is the first macOS release to include it from Apple.
The warning is meant to alert users that their SSH server may not be using these newer, more secure kex algorithms.
** WARNING: connection is not using a post-quantum key exchange algorithm.
** This session may be vulnerable to “store now, decrypt later” attacks.
** The server may need to be upgraded. See https://openssh.com/pq.html
The message will hopefully get the SSH server system admin to upgrade their SSH server to support post-quantum key exchange algorithms, which will provide better security against future quantum computer attacks.
The OpenSSH PQ page has more information about the post-quantum key exchange algorithms and how to disable the warning on the client side if necessary.
If the warning must be suppressed, disable it per host rather than globally, so you’re not totally blind to the security of the SSH servers you connect to.
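A per-host sketch in ~/.ssh/config; the WarnWeakCrypto option name is taken from the OpenSSH 10.1 release notes and should be verified against your ssh_config(5) man page:

```
# Silence the post-quantum warning only for this one legacy host
Host legacy.example.com
  WarnWeakCrypto no
```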