
Exploratory Data Analysis: Gamma Spectroscopy in Python (Part 2)

by admin
July 19, 2025
in Artificial Intelligence


In the first part, I did an exploratory data analysis of the gamma spectroscopy data. We saw that with a modern scintillation detector, we can not only detect that an object is radioactive; with a gamma spectrum, we can also tell why it is radioactive and what kind of isotopes the object contains.

In this part, we will go further, and I will show how to build and train a machine learning model for detecting radioactive elements.

Before we begin, an important warning. All data files collected for this article are available on Kaggle, and readers can train and test their ML models without having real hardware. If you want to test real objects, do it at your own risk. I did my tests with sources that can be legally found and purchased, like vintage uranium glass or old watches with radium dial paint. Please check your local laws and read the safety guidelines about handling radioactive materials. The sources used in this test are not particularly dangerous, but they still have to be handled with care!

Now, let's get started! I will show how to collect the data, train the model, and run it using a Radiacode scintillation detector. For those readers who do not have Radiacode hardware, a link to the data source is added at the end of the article.

Methodology

This article will consist of several parts:

  1. I will briefly explain what a gamma spectrum is and how we can use it.
  2. We will collect the data for our ML model. I will show the code for collecting the spectra using the Radiacode device.
  3. We will train the model and check its accuracy.
  4. Finally, I will make an HTMX-based web frontend for the model, and we will see the results in real time.

Let’s get into it!

1. Gamma Spectrum

This is a short recap of the first part, and for more details, I highly recommend reading it first.

Why is the gamma spectrum so interesting? Some objects around us can be slightly radioactive. The sources vary from the naturally occurring radiation of granite in buildings to the radium in some vintage watches or the thorium in modern thoriated tungsten rods. A Geiger counter only shows us the number of radioactive particles that were detected. A scintillation detector shows us not only the number of particles but also their energies. This is a crucial difference: it turns out that different radioactive materials emit gamma rays with different energies, and each material has its own "footprint."

As a first example, I bought this pendant in a Chinese store:

Image by author

It was advertised as "ion-generating," so I already suspected that the pendant could be slightly radioactive (ionizing radiation, as its name suggests, can produce ions). Indeed, as we can see on the meter screen, its radioactivity level is about 1.20 µSv/h, which is 12 times higher than the background (0.1 µSv/h). It is not crazy high and is comparable to the level on an airplane during a flight, but it is still statistically significant 😉

However, by only observing this value, we cannot tell why the object is radioactive. A gamma spectrum will show us what isotopes are inside the object:

Image by author

In this example, the pendant contains thorium-232, and the thorium decay chain produces radium and actinium. As we can see on the graph, the actinium-228 peak is clearly visible in the spectrum.

As a second example, let's say we have found this piece of rock:

Image source: Wikipedia

This is uraninite, a mineral that contains a lot of uranium dioxide. Such specimens can be found in some areas of Germany, the Czech Republic, or the US. If we get it in a mineral shop, it probably has a label on it. But in the field, that is usually not the case 😉 With a gamma spectrum, we can see a picture like this:

Image by author

By comparing the peaks with known isotopes, we can tell that the rock contains uranium but, for example, not thorium.

A physical explanation of the gamma spectrum is also fascinating. As we can see on the graph below, gamma rays are actually photons and belong to the same spectrum as visible light:

Electromagnetic spectrum, image source: Wikipedia

When some people think that radioactive objects glow in the dark, it is actually true! Every radioactive material is indeed glowing with its own unique "color," but in a very far part of the spectrum that is not visible to the human eye.

A second fascinating thing is that only 10-20 years ago, gamma spectroscopy was accessible only to institutions and big labs (in the best case, some used crystals of unknown quality could be found on eBay). Nowadays, thanks to advances in electronics, a scintillation detector can be purchased for the price of a mid-range smartphone.

Now, let's return to our project. As we can see from the two examples above, the spectra of different objects are different. Let's create a machine learning model that can automatically detect various elements.

2. Collecting the Data

As readers can guess, our first challenge is collecting the samples. I am not a nuclear institution, and I don't have access to calibrated test sources like cesium or strontium. However, for our task, this is not required, and some materials can be legally found and purchased. For example, americium is still used in smoke detectors; radium was used for painting watch dials before the 1960s; uranium was widely used in glass manufacturing before the 1950s; and thoriated tungsten rods are still produced today and can be purchased from Amazon. Even natural uranium ore can be bought in mineral stores; however, it requires a bit more safety precautions. And a benefit of gamma spectroscopy is that we don't need to disassemble or break the objects, and the process is generally safe.

The second challenge is collecting the data. If you work in e-commerce, this is usually not a problem, and every SQL request returns millions of records. Alas, in the "real world," it can be much more challenging, especially if you want to build a database of radioactive materials. In our case, collecting every spectrum requires 10-20 minutes. For every test object, it would be good to have at least 10 records. As we can see, the process can take hours, and having millions of records is not a realistic option.

For getting the spectrum data, I will be using a Radiacode 103G scintillation detector and an open-source radiacode library.

Radiacode detector, image by author

A gamma spectrum can be exported in XML format using the official Radiacode Android app, but the manual process is too slow and tedious. Instead, I created a Python script that collects the spectra using random time intervals:

import datetime
import json
import logging
import random
import time

from radiacode import RadiaCode, RawData, Spectrum


def read_forever(rc: RadiaCode):
    """ Read data from the device """
    while True:
        interval_sec = random.randint(10*60, 30*60)
        read_spectrum(rc, interval_sec)

def read_spectrum(rc: RadiaCode, interval: int):
    """ Read and save a spectrum """
    rc.spectrum_reset()

    # Read
    dt = datetime.datetime.now()
    filename = dt.strftime("spectrum-%Y%m%d%H%M%S.json")
    logging.debug(f"Making spectrum for {interval // 60} min")

    # Wait
    t_start = time.monotonic()
    while time.monotonic() - t_start < interval:
        show_device_data(rc)
        time.sleep(0.4)

    # Save
    spectrum: Spectrum = rc.spectrum()
    spectrum_save(spectrum, filename)

def show_device_data(rc: RadiaCode):
    """ Get CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RawData):
            log_str = f"CPS: {int(record.count_rate)}"
            logging.debug(log_str)

def spectrum_save(spectrum: Spectrum, filename: str):
    """ Save spectrum data to a JSON file """
    duration_sec = spectrum.duration.total_seconds()
    data = {
            "a0": spectrum.a0,
            "a1": spectrum.a1,
            "a2": spectrum.a2,
            "counts": spectrum.counts,
            "duration": duration_sec,
    }
    with open(filename, "w") as f_out:
        json.dump(data, f_out, indent=4)
        logging.debug(f"File '{filename}' saved")


rc = RadiaCode()
read_forever(rc)

Some error handling is omitted here for clarity. A link to the full source code can be found at the end of the article.

As we can see, I randomly pick a time between 10 and 30 minutes, collect the gamma spectrum data, and save it to a JSON file. Now, I only need to place the Radiacode detector near the object and leave the script running for several hours. As a result, 10-20 JSON files will be saved. I also need to repeat the process for every sample I have. As a final output, 100-200 files can be collected. It is still not millions, but as we will see, it is enough for our task.

3. Training the Model

When the data from the previous step is ready, we can start training the model. As a reminder, all files are available on Kaggle, and readers are welcome to make their own models as well.

First, let's preprocess the data and extract the features we want to use.

3.1 Data Load

When the data is collected, we should have a set of spectrum files saved in JSON format. An individual file looks like this:

{
    "a0": 24.524023056030273,
    "a1": 2.2699732780456543,
    "a2": 0.0004327862989157,
    "counts": [48, 52, ..., 0, 35],
    "duration": 1364.0
}

Here, the "counts" array is the actual spectrum data. Different detectors may have different formats; a Radiacode returns the data in the form of a 1024-channel array. The calibration constants [a0, a1, a2] allow us to convert a channel number into energy in keV (kiloelectronvolts).

First, let's make a method to load the spectrum from a file:

import json
from dataclasses import dataclass

import numpy as np


@dataclass
class Spectrum:
    """ Radiation spectrum measurement data """

    duration: int
    a0: float
    a1: float
    a2: float
    counts: list[int]

    def channel_to_energy(self, ch: int) -> float:
        """ Convert channel number to the energy level """
        return self.a0 + self.a1 * ch + self.a2 * ch**2

    def energy_to_channel(self, e: float):
        """ Convert energy to the channel number (inverse of E = a0 + a1*C + a2*C^2) """
        c = self.a0 - e
        return int(
            (np.sqrt(self.a1**2 - 4 * self.a2 * c) - self.a1) / (2 * self.a2)
        )


def load_spectrum_json(filename: str) -> Spectrum:
    """ Load a spectrum from a JSON file """
    with open(filename) as f_in:
        data = json.load(f_in)
        return Spectrum(
            a0=data["a0"], a1=data["a1"], a2=data["a2"],
            counts=data["counts"],
            duration=int(data["duration"]),
        )

Now, we can draw it with Matplotlib:

from typing import Optional

import matplotlib.pyplot as plt

def draw_simple_spectrum(spectrum: Spectrum, title: Optional[str] = None):
    """ Draw a spectrum obtained from the Radiacode """
    fig, ax = plt.subplots(figsize=(12, 3))
    ax.spines["top"].set_color("lightgray")
    ax.spines["right"].set_color("lightgray")
    counts = spectrum.counts
    energy = [spectrum.channel_to_energy(x) for x in range(len(counts))]
    # Bars
    ax.bar(energy, counts, width=3.0, label="Counts")
    # X ticks
    ticks_x = [
       spectrum.channel_to_energy(ch) for ch in range(0, len(counts), len(counts) // 20)
    ]
    labels_x = [f"{ch:.1f}" for ch in ticks_x]
    ax.set_xticks(ticks_x, labels=labels_x)
    ax.set_xlim(energy[0], energy[-1])
    plt.ylim(0, None)
    title_str = "Gamma-spectrum" if title is None else title
    ax.set_title(title_str)
    ax.set_xlabel("Energy, keV")
    plt.legend()
    fig.tight_layout()


sp = load_spectrum_json("thorium-20250617012217.json")
draw_simple_spectrum(sp)

The output looks like this:

Thorium spectrum, image by author

What can we see here?

As mentioned before, from a standard Geiger counter, we can get only the number of detected particles. It tells us whether the object is radioactive or not, but not much more. From a scintillation detector, we get the number of particles grouped by their energies, which is practically a ready-to-use histogram! Radioactive decay itself is random, so the longer the collection time, the "smoother" the graph.
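This smoothing follows directly from counting statistics. Below is a tiny illustration (not part of the measurement pipeline, and the count rate is a hypothetical number) of how the relative noise of a single channel shrinks with collection time, assuming Poisson statistics:

import numpy as np

# Illustration only: under Poisson statistics, a channel with N expected
# counts has a relative noise of ~1/sqrt(N), so longer collection times
# give visibly smoother spectra.
rate_cps = 0.5                           # hypothetical count rate for one channel
for minutes in (1, 10, 60):
    n_counts = rate_cps * minutes * 60   # expected counts after this time
    rel_noise = 1 / np.sqrt(n_counts)    # relative standard deviation
    print(f"{minutes:3d} min: ~{n_counts:.0f} counts, relative noise ~{rel_noise:.0%}")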

3.2 Data Transform

3.2.1 Normalization
Let's look at the spectrum again:

Here, the data was collected for about 10 minutes, and the vertical axis contains the number of detected particles. This approach has a simple drawback: the number of particles is not a constant. It depends on both the collection time and the "strength" of the source. It means that we may not get 600 particles like on this graph, but 60 or 6000. We can also see that the data is a bit noisy. This is especially visible with a "weak" source and a short collection time.

To eliminate these issues, I decided to use a two-step pipeline. First, I applied a Savitzky-Golay filter to reduce the noise:

from scipy.signal import savgol_filter

def smooth_data(data: np.array) -> np.array:
    """ Apply a 1D smoothing filter to the data array """
    window_size = 10
    data_out = savgol_filter(
        data,
        window_length=window_size,
        polyorder=2,
    )
    return np.clip(data_out, a_min=0, a_max=None)

It is especially useful for spectra with short collection times, where the peaks are not so clearly visible.

Second, I normalized the NumPy array to 0..1 by simply dividing its values by the maximum.

The final "normalize" method looks like this:

def normalize(spectrum: Spectrum) -> Spectrum:
    """ Normalize data to the vertical range of 0..1 """
    # Smooth data
    counts = np.array(spectrum.counts).astype(np.float64)
    counts = smooth_data(counts)

    # Normalize
    val_norm = counts.max()
    return Spectrum(
        duration=spectrum.duration,
        a0 = spectrum.a0,
        a1 = spectrum.a1,
        a2 = spectrum.a2,
        counts = counts/val_norm
    )

As a result, spectra from different sources now have a similar scale:

Image by author

As we can also see, the difference between the two samples is clearly visible.

3.2.2 Data Augmentation
Technically, we are ready to train the model. However, as we saw in the "Collecting the Data" part, the dataset is pretty small – I will have only 100-200 files in total. The solution is to augment the data by adding more synthetic samples.

As a simple approach, I decided to add some noise to the original spectra. But how much noise should we add? I selected the 680 keV channel as a reference value, because this part of the spectrum has no interesting isotopes. Then I added noise with 50% of the amplitude of that channel. A np.clip call ensures that the data values are not negative (negative particle counts do not make physical sense).

def add_noise(spectrum: Spectrum) -> Spectrum:
    """ Add random noise to the spectrum """
    counts = np.array(spectrum.counts)
    ch_empty = spectrum.energy_to_channel(680.0)
    val_norm = counts[ch_empty]

    ampl = val_norm / 2
    noise = np.random.normal(0, ampl, counts.shape)
    data_out = np.clip(counts + noise, a_min=0, a_max=None)
    return Spectrum(
        duration=spectrum.duration,
        a0 = spectrum.a0,
        a1 = spectrum.a1,
        a2 = spectrum.a2,
        counts = data_out
    )

sp = load_spectrum_json("thorium-20250617012217.json")
sp = add_noise(normalize(sp))
draw_simple_spectrum(sp)

The output looks like this:

Image by author

As we can see, the noise level is not that big, so it does not distort the peaks. At the same time, it adds some diversity to the data.

A more sophisticated approach could also be used. For example, some radioactive minerals contain thorium, uranium, or potassium in different proportions. It would be possible to combine spectra of existing samples to get some "new" ones.
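As an illustration only (this mixing is not part of my pipeline), a weighted sum of two normalized spectra recorded with the same detector calibration could produce such a synthetic sample:

def mix_spectra(sp_a: Spectrum, sp_b: Spectrum, weight: float) -> Spectrum:
    """ Hypothetical augmentation: blend two normalized spectra
        recorded with the same detector calibration """
    counts_a = np.array(sp_a.counts, dtype=np.float64)
    counts_b = np.array(sp_b.counts, dtype=np.float64)
    mixed = weight * counts_a + (1.0 - weight) * counts_b
    return Spectrum(
        duration=sp_a.duration,
        a0=sp_a.a0, a1=sp_a.a1, a2=sp_a.a2,
        counts=mixed / mixed.max(),  # keep the 0..1 normalization
    )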

3.2.3 Feature Extraction
Technically, we could use all 1024 values "as is" as input for our ML model. However, this approach has two problems:

  • First, it is redundant – we are mostly interested only in specific isotopes. For example, on the last graph, there is a clearly visible peak at 238 keV, which belongs to Lead-212, and a less visible peak at 338 keV, which belongs to Actinium-228.
  • Second, it is device-specific. I want the model to be universal. Using only the energies of the selected isotopes as input allows us to use any gamma spectrometer model.

Finally, I created this list of isotopes:

isotopes = [ 
    # Americium
    ("Am-241", 59.5),
    # Potassium
    ("K-40", 1460.0),
    # Radium
    ("Ra-226", 186.2),
    ("Pb-214", 242.0),
    ("Pb-214", 295.2),
    ("Pb-214", 351.9),
    ("Bi-214", 609.3),
    ("Bi-214", 1120.3),
    ("Bi-214", 1764.5),
    # Thorium
    ("Pb-212", 238.6),
    ("Ac-228", 338.2),
    ("TI-208", 583.2),
    ("AC-228", 911.2),
    ("AC-228", 969.0),
    # Uranium
    ("Th-234", 63.3),
    ("Th-231", 84.2),
    ("Th-234", 92.4),
    ("Th-234", 92.8),
    ("U-235", 143.8),
    ("U-235", 185.7),
    ("U-235", 205.3),
    ("Pa-234m", 766.4),
    ("Pa-234m", 1000.9),
]

def isotopes_save(filename: str):
    """ Save isotopes listing to a file """
    with open(filename, "w") as f_out:
        json.dump(isotopes, f_out)

Only the spectrum values at these isotope energies will be used as input for the model. I also created a method to save the list into a JSON file – it will be needed when loading the model later. Some isotopes, like Uranium-235, may be present in minuscule amounts and may not be practically detectable. Readers are welcome to improve the list on their own.
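The matching isotopes_load method is not shown here, but a minimal sketch of it (it will be used below when the model is loaded) could look like this:

def isotopes_load(filename: str) -> list:
    """ Load the isotopes list saved by isotopes_save """
    with open(filename) as f_in:
        return [(name, energy) for name, energy in json.load(f_in)]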

Now, let's create a method that converts a Radiacode spectrum into a list of features:

from typing import List

def get_features(spectrum: Spectrum, isotopes: List) -> np.array:
    """ Extract features from the spectrum """
    energies = [energy for _, energy in isotopes]
    data = [spectrum.counts[spectrum.energy_to_channel(energy)] for energy in energies]
    return np.array(data)

Practically, we converted the list of 1024 values into a NumPy array with only 23 elements, which is a good dimensionality reduction!
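A quick sanity check with the thorium file from earlier confirms the feature size:

sp = normalize(load_spectrum_json("thorium-20250617012217.json"))
features = get_features(sp, isotopes)
print(features.shape)
#> (23,)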

3.3 Training

Finally, we are ready to train the ML model.

First, let's combine all files into one dataset. In practice, it depends on the samples you have and may look like this:

import glob
from typing import Tuple

all_files = [
    ("Americium", glob.glob("../data/train/americium*.json")),
    ("Radium", glob.glob("../data/train/radium*.json")),
    ("Thorium", glob.glob("../data/train/thorium*.json")),
    ("Uranium Glass", glob.glob("../data/train/uraniumGlass*.json")),
    ("Uranium Glaze", glob.glob("../data/train/uraniumGlaze*.json")),
    ("Uraninite", glob.glob("../data/train/uraninite*.json")),
    ("Background", glob.glob("../data/train/background*.json")),
]

def prepare_data(augmentation: int) -> Tuple[np.array, np.array]:
    """ Prepare data for training """
    x, y = [], []
    for name, files in all_files:
        for filename in files:
            print(f"Processing {filename}...")
            sp = normalize(load_spectrum(filename))
            for _ in range(augmentation):
                sp_out = add_noise(sp)
                x.append(get_features(sp_out, isotopes))
                y.append(name)

    return np.array(x), np.array(y)


X_train, y_train = prepare_data(augmentation=10)

As we can see, our y-values contain names like "Americium." I will use a LabelEncoder to convert them into numeric values:

from sklearn.preprocessing import LabelEncoder


le = LabelEncoder()
le.fit(y_train)
y_train = le.transform(y_train)

print("X_train:", X_train.shape)
#> (1900, 23)

print("y_train:", y_train.shape)
#> (1900,)

I decided to use an open-source XGBoost model, which is based on gradient tree boosting (original paper link). I will also use GridSearchCV to find optimal parameters:

from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV


bst = XGBClassifier(n_estimators=10, max_depth=2, learning_rate=1)
clf = GridSearchCV(
    bst,
    {
        "max_depth": [1, 2, 3, 4],
        "n_estimators": range(2, 20),
        "learning_rate": [0.001, 0.01, 0.1, 1.0, 10.0]
    },
    verbose=1,
    n_jobs=1,
    cv=3,
)
clf.fit(X_train, y_train)

print("best_score:", clf.best_score_)
#> best_score: 0.99474

print("best_params:", clf.best_params_)
#> best_params: {'learning_rate': 1.0, 'max_depth': 1, 'n_estimators': 9}

Last but not least, I need to save the trained model:

bst = clf.best_estimator_  # keep the refitted estimator found by the grid search
isotopes_save("../models/V1/isotopes.json")
bst.save_model("../models/V1/XGBClassifier.json")
np.save("../models/V1/LabelEncoder.npy", le.classes_)

Obviously, we need not only the model itself but also the list of isotopes and labels. If we change something, the data will not match anymore, and the model will produce garbage, so model versioning is our friend!

To verify the results, I need data that the model did not "see" before. I had already collected several XML files using the Radiacode Android app, and just for fun, I decided to use them for testing.

First, I created a method to load this data:

import xmltodict

def load_spectrum_xml(file_path: str) -> Spectrum:
    """ Load a spectrum from a Radiacode Android app file """
    with open(file_path) as f_in:
        doc = xmltodict.parse(f_in.read())
        result = doc["ResultDataFile"]["ResultDataList"]["ResultData"]
        spectrum = result["EnergySpectrum"]
        cal = spectrum["EnergyCalibration"]["Coefficients"]["Coefficient"]
        a0, a1, a2 = float(cal[0]), float(cal[1]), float(cal[2])
        duration = int(spectrum["MeasurementTime"])
        data = spectrum["Spectrum"]["DataPoint"]
        return Spectrum(
            duration=duration,
            a0=a0, a1=a1, a2=a2,
            counts=[int(x) for x in data],
        )

It has the same spectrum values that I used in the JSON files, with some extra data that is not required for our task.
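The snippets above and below call a generic load_spectrum helper; it is part of the full source code, and a simple sketch of it (my assumption of how it can be wired) is just a dispatch by file extension:

def load_spectrum(filename: str) -> Spectrum:
    """ Load a spectrum from a JSON file or a Radiacode Android app XML file """
    if filename.lower().endswith(".xml"):
        return load_spectrum_xml(filename)
    return load_spectrum_json(filename)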

Practically, this is what data collection looks like. This Victorian creamer from the 1890s is 130 years old, and trust me, you cannot get this data by using an SQL request 🙂

Image by author

This uranium glass is slightly radioactive (the level is about 0.08 µSv/h above the background), but it is at a safe level and cannot cause any harm.

The test code itself is simple:

# Load the model
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")
isotopes = isotopes_load("../models/V1/isotopes.json")
le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")

# Load the data
test_data = [
    ["../data/test/background1.xml", "../data/test/background2.xml"],
    ["../data/test/thorium1.xml", "../data/test/thorium2.xml"],
    ["../data/test/uraniumGlass1.xml", "../data/test/uraniumGlass2.xml"],
    ...
]

# Predict
for group in test_data:
    data = []
    for filename in group:
        spectrum = load_spectrum(filename)
        features = get_features(normalize(spectrum), isotopes)
        data.append(features)

    X_test = np.array(data)
    preds = bst.predict(X_test)
    preds = le.inverse_transform(preds)
    print(preds)

#> ['Background' 'Background']
#> ['Thorium' 'Thorium']
#> ['Uranium Glass' 'Uranium Glass']
#> ...

Here, I also grouped the values from different samples and used batch prediction.

As we can see, all results are correct. I was also going to make a confusion matrix, but at least for my relatively small number of samples, all objects were detected properly.
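For readers who want a confusion matrix anyway, a minimal sketch with scikit-learn could look like this (X_test and y_test here are assumed to be arrays built from the test files in the same way as the training data, with y_test already label-encoded):

from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Assumes X_test and y_test were prepared from the test files,
# analogous to prepare_data for the training set
y_pred = bst.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
ConfusionMatrixDisplay(cm, display_labels=le.classes_).plot()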

4. Testing

As a final part of this article, let's use the model in real time with a Radiacode device.

The code is almost the same as at the beginning of the article, so I will show only the essential parts. Using the radiacode library, I connect to the device, read the spectrum once per minute, and use these values to predict the isotopes:

import logging
import time

import numpy as np
from radiacode import RadiaCode, RealTimeData, Spectrum


le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")
isotopes = isotopes_load("../models/V1/isotopes.json")
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")


def read_spectrum(rc: RadiaCode):
    """ Read spectrum data """
    spectrum: Spectrum = rc.spectrum()
    logging.debug(f"Spectrum: {spectrum.duration} collection time")
    result = predict_spectrum(spectrum)
    logging.debug(f"Predict: {result}")

def predict_spectrum(sp: Spectrum) -> str:
    """ Predict the isotope from a spectrum """
    features = get_features(normalize(sp), isotopes)
    preds = bst.predict([features])
    return le.inverse_transform(preds)[0]

def read_cps(rc: RadiaCode):
    """ Read CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RealTimeData):
            logging.debug(f"CPS: {record.count_rate:.2f}")


if __name__ == '__main__':
    logging.basicConfig(
        level=logging.DEBUG, format="[%(asctime)-15s] %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S"
    )

    rc = RadiaCode()
    logging.debug("ML model loaded")
    fw_version = rc.fw_version()
    logging.debug(f"Device connected, firmware {fw_version[1]}")
    rc.spectrum_reset()
    while True:
        for _ in range(12):
            read_cps(rc)
            time.sleep(5.0)

        read_spectrum(rc)

Here, I read the CPS (counts per second) values from the Radiacode every 5 seconds, just to ensure that the device works. Every minute, I read the spectrum and feed it to the model.

Before running the app, I placed the Radiacode detector near the object:

Image by author

This vintage watch was made in the 1950s, and it has radium paint on the digits. Its radiation level is ~5 times the background, but it is still within a safe level (and it is actually 2 times lower than what everyone gets on an airplane during a flight).

Now, we can run the code and see the results in real time:

As we can see, the model's prediction is correct.

Readers who do not have Radiacode hardware can use the raw log files to replay the data. A link is added at the end of the article.
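A minimal replay loop (assuming the saved JSON spectra are in a local folder; the path here is only an example) could look like this:

import glob

# Replay saved spectra through the same prediction pipeline,
# no Radiacode hardware required
for filename in sorted(glob.glob("../data/test/*.json")):
    spectrum = load_spectrum_json(filename)
    print(filename, "->", predict_spectrum(spectrum))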

Conclusion

In this article, I explained the process of creating a machine learning model for predicting radioactive isotopes. I also tested the model with some radioactive samples that can be legally purchased.

I also made an interactive HTMX frontend for the model, but this article is already too long. If there is public interest in this topic, it will be published in the next part.

As for the model itself, there are several ways to improve it:

  • Adding more data samples and isotopes. I am not a nuclear institution, and my choice (not only from financial or legal perspectives, but also considering the free space in my apartment) is limited. Readers who have access to other isotopes and minerals are welcome to share their data, and I will try to add it to the model.
  • Adding more features. In this model, I normalized all spectra, and it works well. However, in this way, we lose the information about the radioactivity level of the objects. For example, uranium glass has a much lower radiation level compared to uranium ore. To distinguish these objects more effectively, we can add the radioactivity level as an additional model feature (a possible sketch is shown after this list).
  • Testing other model types. It seems promising to use a vector search to find the closest embeddings. It can also be more interpretable, and the model can show several closest isotopes. A library like FAISS can be useful for that. Another way is to use a deep learning model, which may also be interesting to test.
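For the second point, a possible sketch (my assumption, not part of the current model) is to append the raw count rate to the existing feature vector:

def get_features_with_rate(spectrum: Spectrum, isotopes: list) -> np.array:
    """ Hypothetical extension: append the overall count rate (counts per second)
        of the raw spectrum to the normalized isotope features """
    rate_cps = np.sum(spectrum.counts) / spectrum.duration
    features = get_features(normalize(spectrum), isotopes)
    return np.append(features, rate_cps)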

In this article, I used a Radiacode radiation detector. It is a nice device that allows making some interesting experiments (disclaimer: I don't have any income or other commercial interest from its sales). For those readers who don't have Radiacode hardware, all collected data is freely available on Kaggle.

The full source code for this article is available on my Patreon page. This support helps me buy equipment or electronics for future tests. Readers are also welcome to connect via LinkedIn, where I periodically publish smaller posts that are not big enough for a full article.

Thanks for reading.
