Automated Linac QA Using Scripting and Varian Developer Mode

Abstract

Purpose: Linac quality assurance (QA) can be time consuming, involving setup, execution, and analysis, and it is subject to user variability. The purpose of this study is to develop quantitative automation tools for mechanical and imaging QA to improve efficiency, consistency, and accuracy. Methods and Materials: Traditionally, QA has been performed with graph paper, film, and multiple phantoms, with analysis done by ruler and vendor-provided software. We have developed a single four-phantom method for QA procedures including light-radiation coincidence, imaging quality, table motion, and isocentricity, with cone beam computed tomography (CBCT) handled separately. XML scripts were developed to execute a series of tasks using Varian's TrueBeam Developer Mode. Non-phantom QA procedures were also developed, including field size, dose rate, MLC position, MLC and gantry speed, star shot, Winston-Lutz, and half beam block. All analysis is performed using in-house MATLAB code. Results: Overall time savings were 2.2 hours per Linac per month. Consistency improvements (standard deviation, SD) were observed for some tests: for example, the field size SD improved from 0.11 mm to 0.04 mm and the table motion SD improved from 0.17 mm to 0.12 mm. The CBCT slice thickness SD improved from 0.99 mm to 0.61 mm. No SD change was observed for the isocentricity test, and the SD increased from 0.33 mm to 0.41 mm for the light-radiation coincidence test. There was a small drop in field size accuracy. Isocentricity measurement accuracy improved from 0.47 mm to 0.15 mm, and table motion accuracy improved from 0.20 mm to 0.16 mm. Conclusion: Automation is a viable, accurate, and efficient option for monthly and annual QA.


Pearman, K., Koch, N., Wiant, D., Liu, H. and Sintay, B. (2021) Automated Linac QA Using Scripting and Varian Developer Mode. International Journal of Medical Physics, Clinical Engineering and Radiation Oncology, 10, 149-168. doi: 10.4236/ijmpcero.2021.104013.

1. Introduction

Radiotherapy is an important and effective modality used in cancer treatment, with the goal to deliver the intended prescription dose to the tumor while sparing the normal tissue as much as possible. The International Commission on Radiation Units and Measurements (ICRU) recommends that the delivered dose be within 5% of the prescribed dose [1]. Several quality assurance (QA) protocols for the linear accelerator have been established in the last couple of decades to achieve this goal [2] [3] [4]. Although specific recommendations have been given in those reports, it is the responsibility of the qualified medical physicist to develop a QA program that is accurate, sensitive, efficient and meets the needs of the treatment techniques used at the facility.

For Linac mechanical and imaging tests, QA procedures have traditionally involved a variety of manual methods (e.g. ruler, graph paper, film) that can be inefficient and subject to user variation. With the help of the electronic portal imaging device (EPID), tools have been developed for specific QA tasks such as Multi-leaf Collimator (MLC) leaf position accuracy [5] [6] and light-radiation field congruence [7]. Furthermore, hardware and software tools have been developed for Linac daily QA using the EPID and kV onboard imaging (OBI) [8] [9]. More recently, Varian Medical Systems released TrueBeam Developer Mode and XML scripting, making it possible to automate many QA procedures and thereby improve efficiency and consistency. Attempts have been made to use Developer Mode for imaging QA tasks [10]. A QA committee was formed at our facility in 2018 with the aim of addressing the QA guidelines from all available recommendations and deciding what could be automated to benefit the existing QA program.

The purpose of this study is to develop quantitative automated QA tools using the EPID, OBI, Varian Developer Mode, and XML scripting for the Varian TrueBeam Linac (Varian Medical Systems, Palo Alto, CA), and to discuss the potential benefits of the new QA tools in terms of efficiency, consistency, and accuracy.

2. Material and Methods

2.1. Equipment

Our facility has six Varian TrueBeam Linacs, five with the Millennium MLC (minimum leaf width 0.5 cm) and one with the high definition (HD) MLC (minimum leaf width 0.25 cm). The Millennium-MLC Linacs use a Varian Exact IGRT table, and the HD-MLC Linac uses a Brainlab (Munich, Germany) table. All Linacs use an MV imaging panel of 1090 × 1090 pixels (0.3360 mm pixel spacing) and a kV imaging panel of 2048 × 1536 pixels (0.1940 mm pixel spacing). Twelve continuous months of data were collected using the manual method and then compared with twelve additional months of data collected with the new method.

2.2. Data Acquisition and Analysis

2.2.1. Phantom QA

We have developed a single four-phantom method for QA procedures including light-radiation coincidence, table travel range, isocentricity, and kV and MV imaging quality. The phantoms used are: (P1) the SN phantom (Sun Nuclear, Melbourne, FL) for the light-radiation coincidence test; (P2) the Varian IGRT phantom for the table travel range and isocentricity tests; (P3) the Las Vegas phantom for the MV imaging quality test (contrast and spatial resolution); and (P4) the Leeds phantom (Leeds Test Objects Ltd., UK) for the kV imaging quality test (contrast, geometry, spatial resolution, and uniformity).

The four-phantom method involves placing all phantoms on the table at set index positions in a linear order, as shown in Figure 1(A) for the Varian IGRT couch and Figure 1(B) for the BrainLab couch. The order is chosen to avoid variations in table thickness. The MV imaging panel is set to −50 cm when imaging through the table, and its position is accounted for in the data analysis.

A series of XML scripts were written and run in Varian's Developer Mode to automate tests using the EPID and OBI. The first XML script drives the table to the initial start position. The vault is entered only once, to index and align the phantoms with the lasers and the indexing bar. The scripts are executed in the order that gives the most efficient process and saves personnel time. The images captured by the EPID are exported to a machine-specific folder, and data analysis is performed with a MATLAB script that automatically pulls the images from those folders. The results are automatically exported to a Microsoft Excel spreadsheet.
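
As an illustration of this workflow, the MATLAB sketch below reads the exported DICOM images from a machine-specific folder and writes a results table to Excel. The folder path, file pattern, and placeholder metric are assumptions for illustration and not the exact in-house implementation.

files = dir(fullfile('C:\QA\TrueBeam1\2021-06', '*.dcm'));   % machine-specific export folder (assumed path)

names  = cell(numel(files), 1);
metric = zeros(numel(files), 1);
for k = 1:numel(files)
    fname = fullfile(files(k).folder, files(k).name);
    img   = double(dicomread(fname));      % EPID/OBI pixel data
    info  = dicominfo(fname);              % header available for panel position, pixel spacing, etc.
    % ... per-test analysis (profiles, FWHM, COM, etc.) would go here ...
    names{k}  = files(k).name;
    metric(k) = mean(img(:));              % placeholder metric for illustration
end

T = table(names, metric, 'VariableNames', {'Image', 'Metric'});
writetable(T, 'QA_results.xlsx');          % auto-export of results to Excel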

· Light-radiation (LR) coincidence

Old method: LR coincidence and half beam block are checked with radiochromic film. Film is placed on the table at 100 cm source-to-surface distance (SSD) and the light field edges are marked. The exposed film is compared to the marks and the difference is recorded in the Excel spreadsheet.

Figure 1. Four-phantom setup for both table types. It was necessary to change the order of the phantoms due to the variations in table thickness, and the scripts had to account for this variability. P1: light and radiation coincidence, P2: table travel and isocentricity, P3: MV imaging, P4: kV imaging.

New method: LR coincidence is confirmed using phantom P1. The phantom is aligned to the machine cross hairs and the light field edges are aligned to the fiducials by manually moving the jaws. This new jaw position is saved at the treatment station. The resulting image shows the radiation field edge along with the light field edge (fiducials). Analysis is performed by drawing a line profile across the radiation edge. The edges of the radiation field are found by full width at half maximum (FWHM), and the fiducial is located by finding the center of mass (COM) of an ROI drawn around it. The difference between the radiation profile edge and the COM of the fiducial is reported in the Excel spreadsheet.
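
A minimal MATLAB sketch of this analysis is shown below, assuming a horizontal profile near the field edge and a small ROI around one fiducial; the row and column ranges, the file name, and the omission of panel magnification are illustrative assumptions rather than the exact in-house code.

img  = double(dicomread('LR_coincidence.dcm'));   % EPID image of the LR phantom (assumed name)
pxmm = 0.336;                                     % MV panel pixel pitch in mm (Section 2.1); magnification ignored here

% Radiation field edge from the FWHM of a line profile drawn across the edge
prof = mean(img(598:602, :), 1);                  % assumed rows crossing the field edge
prof = prof - min(prof);
edgeIdx = find(prof >= max(prof)/2, 1, 'first');  % left field edge at the 50% level

% Light field fiducial position from the center of mass (COM) of a small ROI
roi = img(560:640, 320:400);                      % assumed ROI around one fiducial
w   = max(roi(:)) - roi;                          % invert so the dark fiducial carries the weight
[cols, ~] = meshgrid(1:size(roi,2), 1:size(roi,1));
comX   = sum(cols(:).*w(:)) / sum(w(:));          % COM column within the ROI
fidIdx = comX + 320 - 1;                          % back to full-image coordinates

lrOffset_mm = (fidIdx - edgeIdx) * pxmm;          % light-radiation offset along this profile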

· Isocentricity

Old method: Isocentricity is a check of gantry and table rotation centricity using the kV and MV imagers. The test is performed by indexing the IGRT phantom (P2) on the table, aligning the phantom to the cross hairs, and imaging it with the EPID. The analysis is done in Varian's Offline Review (VOLR) by drawing a measurement line from the center of the fiducial to the digital cross hair. The line length is reported in the Excel spreadsheet.

New method: Isocentricity is determined using phantom P2. The phantom is moved to machine isocenter by the XML script using a pre-defined table position. That table position is determined by rotating the gantry to the four cardinal positions while the phantom is imaged with the kV and MV beams; the measured error, determined with the Varian measurement tool at the treatment station, is reduced iteratively until the table position is optimized.

The fiducial position within the phantom is found by two separate methods in the analysis script. Method 1 uses the optimal table position, found as described above, which is hard coded in the script as a baseline. Method 2 assumes the panel center is the isocenter. The measured fiducial center is compared for both positions, and the smaller error is reported and assumed to be more accurate. Table and panel errors are acquired from the DICOM header information and subtracted for improved accuracy, but in general they appear to be insignificant.

There are three methods used to find the fiducial center. The COM method (1) is the same as described above. The gradient method (2) draws two profiles in the X and Y directions across the fiducial center (assumed to be the darkest pixel) within a region of interest (ROI); each profile is fit with MATLAB's cubic spline and the gradient of both profiles is taken. The maximum and minimum of the gradient represent the fiducial edges, and the center point between the edges is taken as the fiducial center. The contour method (3) is identical to the gradient method but contours the fiducial edges in the X and Y directions; a gradient threshold had to be added to improve the accuracy of what is accepted as a fiducial edge on the far sides of the fiducial. The average of the X and Y centers is taken as the fiducial center.
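
The gradient method (method 2) could be sketched in MATLAB as follows; the file name, ROI coordinates, and upsampling factor are assumptions for illustration.

img = double(dicomread('iso_kV.dcm'));             % kV image of phantom P2 (assumed name)
roi = img(900:1000, 950:1050);                     % ROI around the fiducial (assumed bounds)

[~, idx] = min(roi(:));                            % darkest pixel = approximate fiducial center
[r0, c0] = ind2sub(size(roi), idx);

x  = 1:size(roi, 2);
xi = 1:0.1:size(roi, 2);                           % 10x upsampling
profX = interp1(x, roi(r0, :), xi, 'spline');      % spline-fitted X profile through the fiducial
gX    = gradient(profX);
[~, iRise] = max(gX);                              % rising edge of the fiducial
[~, iFall] = min(gX);                              % falling edge of the fiducial
centerX    = (xi(iRise) + xi(iFall)) / 2;          % fiducial center along X (ROI pixels)

% The Y center is found the same way from roi(:, c0); the COM and contour
% methods described above are run in parallel and the best result is kept.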

· Table travel range

Old method: Table motion is measured with graph paper and a ruler by translating the table by 10 cm in three dimensions. The graph paper is aligned to the machine cross hairs to check the lateral and longitudinal translations, and the vertical motion is checked by attaching a ruler to a phantom. The difference between the graph paper/ruler reading and the Varian digital readout is recorded in the Excel spreadsheet.

New method: Table travel range is measured using phantom P2. After the initial setup of the phantom, a kV image is acquired; the table is then translated in the lateral and longitudinal directions, creating two images with the kV imager, and then in the vertical direction, creating two more images after the gantry is rotated 90 degrees. The analysis locates the phantom edges and then draws an ROI around the phantom center containing the fiducial. The center fiducial is found using the same three methods described above, and the difference between the two fiducial centers after table translation is calculated. The most accurate method is recorded in the Excel spreadsheet to represent the results.
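
The displacement calculation reduces to the pixel distance between the two fiducial centers, de-magnified back to isocenter; a sketch with an assumed source-to-imager distance and illustrative center values is shown below.

pxmm = 0.194;                     % kV panel pixel pitch in mm (Section 2.1)
SAD  = 1000;                      % source-to-axis distance, mm
SID  = 1500;                      % source-to-imager distance, mm (assumed)

centerBefore = 1023.6;            % fiducial center (pixels) before the move, from the methods above (illustrative)
centerAfter  =  251.8;            % fiducial center (pixels) after the commanded 10 cm move (illustrative)

shift_mm = abs(centerAfter - centerBefore) * pxmm * SAD / SID;   % projected back to isocenter
error_mm = shift_mm - 100;        % deviation from the commanded 10 cm translation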

· MV imaging

Old method: The Las Vegas phantom is indexed on the table and imaged with the EPID to determine MV imaging quality. The analysis is performed in VOLR by visually counting the contrast circles.

New method: The Las Vegas phantom is indexed on the table and aligned to the lateral laser. The phantom edge is identified in the acquired image, and six profiles are drawn across the six columns of contrast circles and then smoothed. Each contrast circle center is found by the gradient method described above. This was performed with 6 MV and later with 2.5 MV for improved image quality. A contrast circle is defined by a viable peak in the profile, identified using 1) the gradient method, 2) a defined tolerance above the background signal, and 3) the separation between peaks. The peaks are counted and compared to the established tolerance, and a pass or fail is reported.

The background is determined by finding the lowest values between the peaks using the second gradient of the profile, and a curve is fit to these points to represent the background. A threshold is then set, based on the least detectable peak above the background, as a qualifier for a viable peak. A third qualifier is the distance between peaks, which avoids invalid peaks. The background is subtracted from the profile(s) and the peaks are then searched for using the above validators.
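
A hedged MATLAB sketch of this peak validation is shown below (findpeaks requires the Signal Processing Toolbox); the file name, column range, smoothing window, height threshold, and peak spacing are assumptions.

img  = double(dicomread('LasVegas_2p5MV.dcm'));        % assumed file name
prof = mean(img(:, 400:420), 2)';                      % profile down one column of contrast circles (assumed columns)
prof = movmean(prof, 5);                               % light smoothing

x      = 1:numel(prof);
bkgIdx = islocalmin(prof, 'MinSeparation', 30);        % troughs between the contrast circles
bkg    = interp1(x(bkgIdx), prof(bkgIdx), x, 'pchip', 'extrap');   % curve fit to the troughs = background

signal = prof - bkg;                                   % background-subtracted profile
[pks, locs] = findpeaks(signal, ...
    'MinPeakHeight',   0.002 * max(prof), ...          % least detectable peak above background (assumed)
    'MinPeakDistance', 40);                            % known circle spacing in pixels (assumed)

nCircles = numel(pks);                                 % compared against the established tolerance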

· kV imaging

Old method: The Leeds phantom is placed on the kV imager and imaged. The analysis is performed in VOLR using the available analysis tools: circles and resolution lines are counted visually, and spatial resolution and uniformity are checked with the measurement tools.

New method: The Leeds phantom is indexed longitudinally with a bar and aligned to the lateral laser during the initial setup. Rotation is determined by fiducials placed on the phantom at four cardinal positions, allowing for a more consistent setup and analysis. Further Leeds image alignment is accomplished in the script by rotating the image ±5° while measuring the central rectangle width until a minimum is found, indicating square alignment; the diagonal is then calculated.
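
The rotation search could look like the following sketch; the width metric used here (thresholding the central row of the rotated image) is a simplified stand-in for the in-house rectangle measurement, and the file name and threshold are assumptions.

img    = double(dicomread('Leeds.dcm'));                       % assumed file name
angles = -5:0.1:5;                                             % search range of +/- 5 degrees
width  = zeros(size(angles));

for k = 1:numel(angles)
    rot  = imrotate(img, angles(k), 'bilinear', 'crop');       % Image Processing Toolbox
    row  = rot(round(end/2), :);                               % central row through the rectangle
    dark = row < min(row) + 0.2*(max(row) - min(row));         % pixels assumed to belong to the rectangle
    width(k) = sum(dark);                                      % apparent rectangle width in pixels
end

[~, best]  = min(width);
alignAngle = angles(best);   % angle at which the rectangle is square to the image axes; the diagonal is measured here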

The phantom geometry is calibrated with a baseline image of the phantom placed directly on the kV imager, which gives a central rectangle diagonal of 5 cm. The distance from the phantom surface to the vertical laser was then measured by translating the panel vertically. The phantom was then placed on the table with its surface at the vertical laser and the table position recorded; the table was translated another −10 cm vertically for a second measurement. This allows the projected value of the central rectangle diagonal in the image to be calculated. Two vertical positions are used and the average is reported. The central rectangle edges are found and the diagonal is then calculated.

The phantom center is found by first locating the phantom's cardinal edges using a MATLAB function. From the center, the contrast circles are found at a specified radius: circular profiles are drawn, smoothed, and the contrast circles are identified by a method that looks for points surrounded by consecutive smaller values. The spatial resolution lines are counted by drawing three profiles across that phantom region, smoothing them, and counting the peaks with the same method. The uniformity is found by drawing an ROI at a consistent, uniform position in the phantom.
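
Sampling a circular profile at a fixed radius from the phantom center could be done as sketched below; the file name, center, radius, and smoothing window are assumptions, and peak counting then reuses the validated-peak logic shown for the Las Vegas test.

img    = double(dicomread('Leeds.dcm'));     % assumed file name
cx = 1024;  cy = 768;                        % phantom center in pixels, from the edge detection above (assumed here)
radius = 250;                                % radius of the contrast-circle ring in pixels (assumed)

theta = linspace(0, 2*pi, 720);              % one sample every 0.5 degrees
xs = cx + radius*cos(theta);
ys = cy + radius*sin(theta);

circProf = interp2(img, xs, ys, 'linear');   % circular profile sampled at sub-pixel positions
circProf = movmean(circProf, 9);             % smoothing before peak counting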

· CBCT

Catphan®504 and Catphan®604 phantoms (The Phantom Laboratory, Inc., Greenwich, NY) are used for cone beam computed tomography (CBCT) QA in a separate setup due to table room constraints.

Old method: The analysis is performed in VOLR. The resolution test is performed by scrolling to the line-pair section of the phantom and counting the number of discernible line pairs. Similarly, contrast is determined by counting the discernible circles in the supra-slice 1% section. HU uniformity is determined by drawing a 10 × 10 mm ROI in a uniform section of the phantom at five cardinal positions and comparing to the center ROI. HU contrast is measured by placing a 7 × 7 mm ROI centered in each density plug. Slice thickness is measured using the method recommended in the Catphan manual, utilizing the FWHM of a profile drawn across a slice.

New method: The new analysis is performed with a custom MATLAB script. A specific slice is first located in the image set using a fiducial marker, and the slices used for each image quality analysis are then located at set distances from this slice for consistency. The phantom edges are found using a MATLAB function and the center is calculated for the analyses below.

ROIs are drawn for HU uniformity and HU contrast at specific points measured from the phantom center so that the same positions are used consistently. The average pixel value of each ROI is converted to HU by the formula (pixel average value) × RescaleSlope + RescaleIntercept, with both values pulled from the DICOM header. The resolution and supra-slice 1% tests are performed by drawing a semi-circular profile across each section; the peaks are counted and compared to a tolerance, using the same method described in the Leeds section above. Slice thickness is determined by drawing a profile across the phantom ramp in the transverse view. Two methods are used to calculate the slice thickness: one uses the FWHM of the profile, and the other finds the minimum and maximum of the gradient and assumes these are the slice edges, with the slice thickness taken as the distance between them. The value closest to target is reported.
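
A sketch of the HU conversion and the two slice-thickness estimates is shown below; the slice file, ROI, and profile coordinates are assumptions, and any ramp-angle scaling factor from the Catphan manual is omitted for brevity.

info = dicominfo('CBCT_slice_045.dcm');                       % assumed slice selected via the fiducial marker
img  = double(dicomread(info));
hu   = img * info.RescaleSlope + info.RescaleIntercept;       % pixel value -> HU, from the DICOM header

roi    = hu(240:260, 240:260);                                % uniformity/contrast ROI (assumed position)
meanHU = mean(roi(:));

prof = hu(200, 150:360);                                      % profile across the ramp (assumed row/columns)
prof = prof - min(prof);
half = max(prof)/2;
fwhmPx = find(prof >= half, 1, 'last') - find(prof >= half, 1, 'first');   % method 1: FWHM

g = gradient(movmean(prof, 3));
[~, iUp]   = max(g);                                          % method 2: gradient extrema as ramp edges
[~, iDown] = min(g);
gradPx = abs(iDown - iUp);

pxmm = info.PixelSpacing(1);                                  % in-plane pixel size in mm
sliceEstimates_mm = [fwhmPx, gradPx] * pxmm;                  % the estimate closest to target is reported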

The holes for the spatial measurement are found using a MATLAB edge detection function as before, and the center of each contour is assumed to be the hole center. Spatial calibration was performed by measuring the phantom width in pixels within the MATLAB script; VOLR was used to measure the phantom width in cm with the provided measurement tool, and the average phantom width in cm is used in the script to give pixels per cm.

· Winston-Lutz test

Old method: The Winston-Lutz test is performed by attaching the BrainLab pointer to the table, aligning it to the cross hairs, and imaging with the EPID. A combination of eight gantry and table angles is used, which requires the vault to be entered before each image is taken.

New method: The Brainlab pointer is set up as in the old method and then an XML script is run. Multiple gantry angles and couch kicks are used when imaging the BrainLab pointer.

The analysis is done using a MATLAB script for both old and new methods.
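
For one Winston-Lutz image, the field center can be taken from the FWHM edges of crossed profiles and the ball-bearing center from the centroid of a thresholded ROI (regionprops requires the Image Processing Toolbox); the file name, ROI, threshold, and source-to-imager distance are assumptions in this sketch.

img  = double(dicomread('WL_G000_T000.dcm'));      % assumed file name for one gantry/table combination
pxmm = 0.336 * 100/150;                            % MV pixel pitch projected to isocenter (SID 150 cm assumed)

profX = mean(img, 1);  inX = profX >= max(profX)/2;
profY = mean(img, 2)'; inY = profY >= max(profY)/2;
fieldX = (find(inX, 1, 'first') + find(inX, 1, 'last')) / 2;   % field center from the 50% edges
fieldY = (find(inY, 1, 'first') + find(inY, 1, 'last')) / 2;

roi = img(500:600, 500:600);                       % ROI around the ball bearing (assumed; BB is the only dark object)
bw  = roi < min(roi(:)) + 0.3*(max(roi(:)) - min(roi(:)));
s   = regionprops(bw, 'Centroid');
bbX = s(1).Centroid(1) + 500 - 1;
bbY = s(1).Centroid(2) + 500 - 1;

offset_mm = hypot(bbX - fieldX, bbY - fieldY) * pxmm;   % 2D BB-to-field-center distance for this image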

2.2.2. Non-Phantom QA

· Field size (FS)

Old method: FS is checked by placing graph paper or a ruler on the table at 100 cm source-to-surface distance (SSD). Three field sizes of 5 cm, 10 cm, and 25 cm are manually programmed, and the measurements are taken with graph paper.

New method: The FS is determined by imaging three field sizes of 5 cm, 10 cm, and 25 cm with the EPID. The imager is at isocenter when the images are generated. In the analysis, profiles are drawn in the X and Y directions and overlaid in a plot. The field edges are found by FWHM, and the distance between the edges, calculated from the panel center, gives the field size width. Calibration was performed by first calibrating the jaws with graph paper at isocenter; images of the three field sizes were then acquired at isocenter, and an average pixels-per-cm value was determined and used in the calculations.
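
A sketch of the field-size analysis is given below; the file name and the pixels-per-cm calibration value are assumptions (the value shown simply corresponds to the 0.336 mm pixel pitch with the imager at isocenter).

img     = double(dicomread('FS_10x10.dcm'));    % assumed file name
pxPerCm = 10 / 0.336;                           % pixels per cm with the imager at isocenter (calibration assumed)

[nr, nc] = size(img);
profX = img(round(nr/2), :);  profX = profX - min(profX);   % crossline profile through the panel center
profY = img(:, round(nc/2))'; profY = profY - min(profY);   % inline profile through the panel center

inX = profX >= max(profX)/2;                    % FWHM edges at the 50% level
inY = profY >= max(profY)/2;
fsX_cm = (find(inX, 1, 'last') - find(inX, 1, 'first')) / pxPerCm;
fsY_cm = (find(inY, 1, 'last') - find(inY, 1, 'first')) / pxPerCm;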

· MLC leaf position accuracy

Old method: To confirm MLC position, radiation is delivered to the EPID in the form of eight sliding windows, where the ending edge of each MLC becomes the beginning edge of the next slide. This is performed at the four cardinal gantry angles. The images are analyzed in VOLR by drawing an ROI around each MLC junction and monitoring the calibrated units (CU). This is not a true individualized position measurement but a quantitative measurement of the combined MLC positions.

New method: The absolute MLC position confirmation is a true position check, compared to the treatment planning system (TPS) position. First, the imager is placed at isocenter; the XML script then duplicates the delivery of the old method. There are four gantry angles with eight images each and 60 individual leaves in each of banks A and B, giving 3840 positions to analyze. Each MLC leaf is found by locating the first leaf's lateral position and then calculating the lateral positions of the remaining leaves. In the MATLAB analysis script, a profile is drawn across the end of each leaf and fit with a cubic spline, and the position is plotted for each image, separated by MLC bank, slide, and gantry angle. The standard deviation (SD) is reported for the four individual gantry angles and for the sum of all gantry angles.
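
For a single leaf end, the position measurement could be sketched as below; the file name, leaf rows, and planned position are illustrative assumptions.

img  = double(dicomread('MLC_slide3_G180.dcm'));   % assumed file name for one slide and gantry angle
pxmm = 0.336;                                      % MV pixel pitch with the imager at isocenter

leafRows = 541:545;                                % rows within one leaf (assumed)
prof = mean(img(leafRows, :), 1);
prof = (prof - min(prof)) / (max(prof) - min(prof));

x  = 1:numel(prof);
xi = 1:0.05:numel(prof);                           % 20x upsampling for a sub-pixel edge estimate
pf = interp1(x, prof, xi, 'spline');               % cubic-spline fit of the leaf-end profile
edgePos = xi(find(pf >= 0.5, 1, 'first'));         % leaf end taken at the 50% level

centerPx   = (numel(prof) + 1) / 2;                % panel center column
leafPos_mm = (edgePos - centerPx) * pxmm;          % measured leaf-end position
planPos_mm = -40.0;                                % TPS-planned position for this slide (illustrative)
error_mm   = leafPos_mm - planPos_mm;              % contributes to the per-slide and per-gantry SD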

· Star shot

Old method: Film is used for the MLC, jaw, table, and gantry star shots. A star shot is a spoke-wheel pattern formed by exposing the film to individual radiation lines, shaped by the jaws or MLC, as the gantry, collimator, or table is rotated. The intersection of the spokes determines the centricity of the gantry, collimator, or table rotation.

New method: XML scripts have been developed for the MLC and jaw star shots, with images recorded by the EPID. Table and gantry star shots are still performed with film.

For both old and new methods, images are imported into the FilmQA™ Pro software (Ashland, KY) for further analysis.

· Dose rate, gantry, and MLC speed

Dose rate, gantry speed, and MLC speed tests are new to our QA program; they are fully automated and have no old method for comparison. As shown in Figure 2(A), seven identical adjoining sub-fields are imaged and combined, and a separate open field of the same size is also imaged. Changes in dose rate are measured by delivering each of the seven fields with a different gantry speed using the same MUs. If the gantry speed and dose rate are uniform during MU delivery, then the seven individual fields should have the same intensity as the identical positions in the open field.

Figure 2. Images for dose rate, gantry, and MLC speed.

Similarly, MLC speed is measured by varying the gantry and MLC speeds while maintaining the dose rate (MU/degree) so that the same MU is delivered to each sub-field, as shown in Figure 2(B). An open field is also imaged and compared as before.
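
The comparison reduces to the ratio of each sub-field strip in the combined image to the matching strip in the open field; a sketch with assumed file names and strip boundaries follows, and for a uniform delivery all seven ratios should agree.

combined = double(dicomread('DRGS_combined.dcm'));   % assumed file names
openFld  = double(dicomread('DRGS_open.dcm'));

edges = round(linspace(300, 790, 8));                % column boundaries of the seven strips (assumed)
rows  = 400:700;                                     % rows well inside the field (assumed)

ratio = zeros(1, 7);
for k = 1:7
    cols = edges(k):edges(k+1) - 1;
    ratio(k) = mean(combined(rows, cols), 'all') / mean(openFld(rows, cols), 'all');
end

deviation_pct = 100 * (ratio / mean(ratio) - 1);     % per-strip deviation, compared against tolerance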

3. Results

3.1. Phantom QA

· LR coincidence

Figure 3 shows an example of an LR coincidence result extracted from the acquired images using the in-house MATLAB code, where profiles are drawn across the image. The SD increased from 0.033 cm with the old method to 0.041 cm with the new method, and the separation distance (error) changed from −0.008 cm to −0.015 cm. A closer look at the raw data from the old manual method shows a strong tendency for the user to record zero error. This is assumed to be an over-approximation by the user, who judges the pen mark to be close enough to the radiation edge and cannot visually resolve sub-millimeter differences between a fuzzy radiation edge and a relatively large pen mark. The values reported by the new method are well within tolerance and more than acceptable.

· Isocentricity

Figure 4 shows an example of the old and new analyses for phantom P2. The SD remained unchanged at 0.014 cm for both methods; however, the error (distance from the cross hair) improved from 0.047 cm to 0.015 cm.

· Table travel range

Figure 3. Light/radiation coincidence for the new method analysis.

Figure 4. Isocentricity. The yellow line is the expected position and the blue line is the measured position.

Figure 5. Table translation. Center of fiducial found using the new method.

Figure 5 shows an example of the new method finding a fiducial center for phantom P2, using the three methods previously described. Compared to the old method, the SD improved from 0.017 cm to 0.012 cm and the error (distance from the target position) improved from 0.020 cm to 0.016 cm with the new method.

· MV imaging

Figure 6 shows the new method analysis for MV image quality. Each of the six boxes shows the result for a profile drawn across one column of circles. An asterisk indicates a valid circle, and a green asterisk indicates that the tolerance is met. The background is displayed as a solid orange line, the threshold as a dashed red line, and the profile as a solid blue line. There is no statistical comparison since the tolerance is met for both methods.

· kV imaging

Figure 7 shows the analysis for Leeds phantom P4, which contains contrast circles, spatial resolution lines, geometry measurements, and the SD of a uniform area. The SD shows an improvement from 1.327 to 0.140. This could be related to imaging through the table, although it is unclear why that would improve the SD; the placement of the ROI is also certainly more consistent than with the old method. The other analysis parameters have no comparison since the tolerance is always met with both methods.

· CBCT

The resolution, HU contrast, and HU uniformity test results showed very little difference between the new and old methods, which is expected for uniformity since where the ROI is placed (manually or by the script) should have little impact. The spatial test indicated a decrease in accuracy with the new method, from 0.010 cm to 0.016 cm against a 5 cm target, while the SD improved from 0.017 to 0.011. Slice thickness accuracy was slightly lower with the new method (0.038 cm versus 0.034 cm), while the SD improved from 0.099 to 0.061. Figure 8 shows the analysis images for two (slice thickness and spatial resolution) of the six images used by the new method.

· WL

Figure 6. Las Vegas. Contrast circles detected by the new method script for a 2.5 MV image.

Figure 7. Leeds. (A) shows the analysis for the contrast circles and (B) shows the analysis for spatial resolution. * represents found peaks.

The analysis did not change from the old method to the new method; only the delivery differed. The results are displayed in Figure 9. There are a total of eight gantry and couch angle combinations, with the difference between the fiducial and isocenter reported for each.

3.2. Non-Phantom QA

· FS

Figure 8. CBCT analysis for slice thickness and spatial accuracy.

Figure 9. Winston-Lutz analysis. The difference between the dots indicates the error.

The SD improved from 0.011 to 0.004 with the new method, while the error (result compared to the target width) increased from 0.007 cm to 0.027 cm. See Figure 10(A) for the new method analysis.

Figure 10. Field size (A) and half beam block (B) analysis.

· Half beam block (HBB)

Figure 10(B) shows an analysis example of the new method. A positive central peak indicates a gap between the jaws, whereas a negative peak indicates overlap between the jaws. There is no tolerance limit.

· MLC position

Figure 11 shows the plot of the MLC positions for the new method compared to the TPS target values, seen as the center of each of the eight plots; the plot axis range corresponds to the tolerance limit. Only bank B is shown, and gantry angles are separated in the plot. The SD is reported for the individual slides S(x), for each gantry angle (σ1), and for the sum of all gantry angles (σ2). Slides 1, 2, 7, and 8 are cropped because the collimator edges do not allow accurate analysis. Gantry angle 270° tends to be more out of family, and the patterns produced by gantry angle usually repeat from week to week.

· Star shot

Analysis has always been performed using the FilmQA™ Pro software (Ashland, KY); only the delivery and setup have changed for the jaw and MLC star shots.

Figure 11. New method for MLC analysis.

Figure 12. Results chart with the comparison between old and new methods.

A difference in methodology did not allow a direct comparison for some of the QA tests listed in the results chart shown in Figure 12, so the chart is organized based on whether a comparison is possible. Time savings could be compared for all of the QA tests mentioned except dose rate, gantry speed, and MLC speed, which are new procedures.

The time saved was 130.6 minutes per Linac for monthly QA and 200.6 minutes for annual QA; the annual QA adds only the star shots. The main contributors to the time savings were the MLC position and WL tests. The times were measured by performing the QA delivery with a stopwatch running, and data entry was timed the same way. Time was saved in every QA process reported.

Dose rate, gantry speed, and MLC speed are new QA procedures and therefore have no old-method comparison.

4. Discussion

Time savings and consistency are the most valuable aspects of QA automation. The old methods were very inefficient, mostly due to the MLC and WL tests (both performed weekly); changes to these tests accounted for 72% of the time savings for the monthly QA. Setup and execution are responsible for 82% of the monthly time savings, with analysis and data entry contributing the remaining 18% by eliminating the need to open spreadsheets and enter data manually. Mistakes from manual data entry could not be gauged, but their elimination is still considered a positive aspect of the new approach. The reported time savings can be subjective due to user variation; the most experienced QA practitioner was used to time each step. Automation can make an inexperienced user more efficient, and the results more accurate and consistent.

The old method required multiple separate phantom setups, measurements, and data entries, all of which were inefficient. The vault had to be entered one or more times depending on the QA task. In the new method, the vault only needs to be entered once because of the four-phantom single setup and because several QA tests transitioned from a phantom setup to no phantom at all. The convenience factor, along with the time savings, must be considered a positive contribution. The CBCT test requires the vault to be entered again for both the old and new methods, so it does not affect setup time, as seen in the results chart (Figure 12).

XML scripts automated the delivery of each QA test. A single script could be written to deliver all of the QA, but XML scripts are linear in nature: if the script is modified, all of the lines that follow the modification are also subject to change, which invites errors and a large labor expense. An XML script generator is a possible solution, but at present the effort outweighs the benefit. In our practice, scripts are kept as individual files, even though a couple were combined before these difficulties were fully realized. After a script is run and the images are generated and exported to their specific folders, the next script is run; this keeps images from being confused with images from a different QA task. Automatic image export from Varian's Developer Mode is currently not an option. Loading each script takes only a few seconds, so having individual scripts is not a time issue, and the order in which the scripts are run is chosen to reduce the time the Linac spends rotating to each new setup position.

Non-phantom QA has fewer drawbacks than phantom QA. There are no hard-coded positions in the XML scripts other than the panel and gantry positions, which are not subject to change; this allows one script to be used for multiple Linacs. The LR test requires a hard-coded vertical table position for the light field edges to match the phantom's 10 cm × 10 cm geometry. The Leeds phantom setup is similar and also requires a hard-coded vertical position for the spatial measurement of the rectangle, which is necessary for the projection to be calculated accurately. Isocentricity is the most demanding QA task in the new method, since the table must be exactly positioned in three dimensions. Table motion is also hard coded for table position in three dimensions but is much more forgiving because it is a relative measurement: the position only needs to be close enough, since the phantom edges are detected first and the fiducial position is then approximately known as measured from the edge.

Baselines for these positions were collected and then checked again six months later; the differences were within 0.02 cm, which can be explained by setup uncertainty. A new baseline is collected and the XML scripts are modified whenever a new table calibration is performed.

The time saving of the new method compared to the old method is significant. However, the same is not always true for the accuracy and consistency of the measurements. No improvements were found for the LR coincidence and FS measurements with the new method. A close look at the data measured and entered manually for the old method clearly indicates a tendency to record no error, or a value with no decimal places. The FS can have a fuzzy edge when viewed against graph paper or a ruler, which can lead the user to report a close approximation as no error. The FS is also calibrated with graph paper using the old method rather than with the script, which could account for the better accuracy of the old method. LR is measured and marked on film, again with fuzzy edges and pen marks for the light edge that can be approximated as being on target. The script always records two decimal places and rarely reports zero error. The SD and the error measured by the script are well within tolerance.

The analysis scripts for QA tests with a phantom had to allow for more variation due to setup error than the non-phantom scripts. For the isocentricity and table motion tests, a fiducial center must be identified within the P2 phantom. To avoid a false fiducial identification, pixel values above and below certain thresholds (based on the average background values) are replaced. False gradients are also an issue, so a gradient threshold is used that represents a true fiducial edge, based on an average determined by sampling many images. False positives (passing results that did not actually find the fiducial center) are a possibility, so the images are displayed for review. We note that, compared to MV images, kV images have better contrast and sharper edges, allowing easier detection by the script when using a gradient detection method.

MV imaging quality is determined using phantom P3. This phantom is indexed and has no alignment issues. Some of the wider peaks (see Figure 6) can have double peaks that smoothing does not always eliminate; since the peak separation is known, a found peak that is not within a threshold of the expected spacing is ignored. The three qualifiers mentioned above are enough to eliminate false peaks. This procedure is necessary for the 6 MV beam. For the 2.5 MV beam, peak detection is much more reliable, so the procedure may not be as necessary but is left in place as a precaution.

kV image quality is checked with phantom P4. A new baseline SD was determined because the image now includes the table.

CBCT analysis with MATLAB is the most recent addition to our QA program. The CBCT QA is performed in Varian's service mode and is quick to export. The HU contrast and HU uniformity measurements did not present any challenges due to the simplicity of the measurement. The resolution test had the challenge of detecting the 6th line-pair section; very little smoothing of the profile is used because of the small separation of the 6th line pairs (0.083 cm). Three to five lines of the 6th line pair are usually detected. Three separate slices are used for the calculation: each slice is measured individually, and the slice with the largest number of 6th line-pair peaks found is reported. When the counts are the same, the profiles are averaged and that result is used. The results chart lists zero error and zero SD because six line pairs were always recorded with both the old and new methods. Low-contrast circles were found as described for the Leeds contrast circles above. The tolerance is the 6th supra-slice 1.0% circle, which is easily detected; the average with both the old and new methods is the 7th supra-slice 1.0% circle, and the maximum is 9, leaving an error of 2. Slice thickness is measured with two methods using three slices, and the best result is reported. Slice thickness measurements with the old method in VOLR could be unreliable, as they showed a great deal of variance; the profile shape usually indicated the reliability of the measurement, and the same was observed with the new method. The two methods in the MATLAB script use the FWHM measurement and the gradient method described previously. Three slices are measured, giving six results, and the result closest to target is reported.

The four-phantom setup is unique in its design. The phantoms used were those already in clinical use, which allowed for a better statistical comparison, and no additional equipment or software was purchased; essentially all of the costs involved were labor. There are marketed automated QA products that could be purchased, but they are quite expensive and were not an option. The goal was to produce an in-house product that would be efficient and accurate and would save time through automation. This paper has demonstrated the accomplishment of that goal through Varian's Developer Mode XML scripting and MATLAB scripting.

A future area of work could be a single do-it-all phantom to increase efficiency even further, although there are currently no plans to pursue this.

5. Conclusion

Automated QA delivery and analysis have been shown to be accurate, sensitive, and efficient. Not all QA can benefit from automation, however, and each test should be evaluated before automation begins.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Shalek, R.J. (1977) Determination of Absorbed Dose in a Patient Irradiated by Beams of X or Gamma Rays in Radiotherapy Procedures. Medical Physics, 4, 461.
https://doi.org/10.1118/1.594356
[2] Kutcher, G.J., Coia, L., Gillin, M., et al. (1994) Report of AAPM TG 40, Comprehensive QA for Radiation Oncology. Medical Physics, 21, 581-618.
https://doi.org/10.1118/1.597316
[3] Klein, E.E., Hanley, J., Bayouth, J., et al. (2009) Task Group 142 Report: Quality Assurance of Medical Accelerators. Medical Physics, 36, 4197-4212.
https://doi.org/10.1118/1.3190392
[4] Smith, K., Balter, P., Duhon, J., et al. (2017) AAPM Medical Physics Practice Guideline 8.a.: Linear Accelerator Performance Tests. Journal of Applied Clinical Medical Physics, 18, 23-39.
https://doi.org/10.1002/acm2.12080
[5] Li, Y., Chen, L., Zhu, J., Wang, B. and Liu, X. (2017) A Quantitative Method to the Analysis of MLC Leaf Position and Speed Based on EPID and EBT 3 Film for Dynamic IMRT Treatment with Different Types of MLC. Journal of Applied Clinical Medical Physics, 18, 106-115.
https://doi.org/10.1002/acm2.12102
[6] Chang, J., Obcemea, C., Sillanpaa, J., Mechalakos, J. and Burman, C. (2004) Use of EPID for Leaf Position Accuracy QA of Dynamic Multi-Leaf Collimator (DMLC) Treatment. Medical Physics, 31, 2091-2096.
https://doi.org/10.1118/1.1760187
[7] Njeh, C.F., Caroprese, B. and Desai, P. (2012) A Simple Quality Assurance Test Tool for the Visual Verification of Light and Radiation Field Congruent Using Electronic Portal Images Device and Computed Radiography. Radiation Oncology, 7, 49.
https://doi.org/10.1186/1748-717X-7-49
[8] Das, I.J., Cao, M., Cheng, C.-W., et al. (2011) A Quality Assurance Phantom for Electronic Portal Imaging Devices. Journal of Applied Clinical Medical Physics, 12, 391-403.
https://doi.org/10.1120/jacmp.v12i2.3350
[9] Sun, B., Goddu, S.M., Yaddanapudi, S., et al. (2015) Daily QA of Linear Accelerators Using Only EPID and OBI. Medical Physics, 42, 5584-5594.
https://doi.org/10.1118/1.4929550
[10] Valdes, G., Morin, O., Valenciaga, Y., Kirby, N., Pouliot, J. and Chuang, C. (2015) Use of TrueBeam Developer Mode for Imaging QA. Journal of Applied Clinical Medical Physics, 16, 322-333.
https://doi.org/10.1120/jacmp.v16i4.5363
