<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">EJ-AI</journal-id>
<journal-id journal-id-type="nlm-ta">EJ-AI</journal-id>
<journal-id journal-id-type="publisher-id">EJ-AI</journal-id>
<journal-title-group>
<journal-title>European Journal of Artificial Intelligence and Machine Learning</journal-title>
</journal-title-group>
<issn pub-type="epub">2796-0072</issn>
<publisher>
<publisher-name>European Open Science</publisher-name>
<publisher-loc>UK</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">70120</article-id>
<article-id pub-id-type="doi">10.24018/ejai.2026.5.2.70120</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Adapted Deep Key Generation Using Fourier&#x2013;Riesz Features for Secure Video Encryption</article-title>
<alt-title alt-title-type="left-running-head">Adapted Deep Key Generation Using Fourier&#x2013;Riesz Features for Secure Video Encryption</alt-title>
<alt-title alt-title-type="right-running-head">Rajaosolomanantena <italic>et al</italic>.</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0009-0004-4298-8173</contrib-id><name name-style="western"><surname>Rajaosolomanantena</surname> <given-names>Haingonirina Ignace</given-names></name><email>rhignace@gmail.com</email></contrib>
<contrib id="author-2" contrib-type="author"><name name-style="western"><surname>Ravaliminoarimalalason</surname> <given-names>Toky Basilide</given-names></name></contrib>
<contrib id="author-3" contrib-type="author"><name name-style="western"><surname>Andriamanohisoa</surname> <given-names>Hery Zo</given-names></name></contrib>
<aff><institution>Ecole Doctorale en Sciences et Techniques de l&#x2019;Ing&#x00E9;nierie et de l&#x2019;Innovation (ED STII), Ecole Sup&#x00E9;rieure Polytechnique, Laboratory of Cognitive Sciences and Applications, University of Antananarivo</institution>, <country country="MG">Madagascar</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label><bold><italic>Corresponding Author:</italic></bold> e-mail: <email>rhignace@gmail.com</email></corresp>
<fn fn-type="other"><p>The authors declare that they do not have any conflict of interest.</p></fn>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2026</year>
</pub-date>
<pub-date date-type="pub" publication-format="electronic">
<day>16</day>
<month>4</month>
<year>2026</year>
</pub-date>
<volume>5</volume>
<issue>2</issue>
<fpage>19</fpage>
<lpage>27</lpage>
<history>
<date date-type="received">
<day>26</day>
<month>12</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>4</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2026 Rajaosolomanantena et al.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Rajaosolomanantena et al.</copyright-holder>
<license>
<ali:license_ref xmlns:ali="http://www.niso.org/schemas/ali/1.0/">https://creativecommons.org/licenses/by-nc-sa/4.0/</ali:license_ref>
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</ext-link>.</license-p></license>
</permissions>
<self-uri xlink:title="pdf" content-type="pdf" xlink:href="EJ-AI_70120.pdf"></self-uri>
<abstract abstract-type="summary">
<p>Video encryption protects multimedia data over insecure networks. This paper introduces a hybrid key-generation framework combining Fourier&#x2013;Riesz features with an adapted deep neural model to produce dynamic, frame-dependent keys. A four-channel representation integrating spectral magnitude, spectral phase, directional amplitude, and orientation ensures key decorrelation. Experiments conducted on standard video datasets showed entropy values ranging between 7.96 and 7.99 bits, a strong avalanche effect with an average Hamming distance of 129.62, near-zero inter-frame and inter-channel correlations, and preserved visual quality with a PSNR of 42 dB. Security analysis confirmed overall robustness through extensive evaluations.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Deep learning</kwd>
<kwd>Fourier transform</kwd>
<kwd>key generation</kwd>
<kwd>Riesz transform</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1.</label>
<title>Introduction</title>
<p>The rapid growth of video-based applications has made multimedia security a critical challenge, as traditional encryption algorithms such as AES, DES, and RSA remain computationally costly for large-scale video data [<xref ref-type="bibr" rid="ref-1">1</xref>], [<xref ref-type="bibr" rid="ref-2">2</xref>]. Faster alternatives, including selective and chaos-based encryption, improve efficiency but often suffer from reduced robustness and exploitable vulnerabilities [<xref ref-type="bibr" rid="ref-3">3</xref>], [<xref ref-type="bibr" rid="ref-4">4</xref>]. To overcome these limitations, adaptive deep learning&#x2013;based key generation has emerged as an effective solution by exploiting content-dependent characteristics [<xref ref-type="bibr" rid="ref-5">5</xref>].</p>
<p>This work proposes a hybrid video encryption framework that integrates Fourier and Riesz transforms within a deep neural architecture to capture both spectral and directional information [<xref ref-type="bibr" rid="ref-6">6</xref>]. A spectro-directional tensor is processed by an orthogonally constrained network, ensuring key stability, decorrelation, and numerical robustness [<xref ref-type="bibr" rid="ref-7">7</xref>]. Hybrid activation, adaptive training, and Jacobian control enable high entropy, strong avalanche effects, and inter-frame independence. Extensive experiments confirm the effectiveness of the proposed framework in achieving secure, robust, and high-quality video encryption.</p>
</sec>
<sec id="s2">
<label>2.</label>
<title>Literature Review</title>
<p>Recent studies show that deep learning is increasingly used for secret key generation from biometric data. Symmetric keys have been generated from fingerprint images using a VGG-16 network [<xref ref-type="bibr" rid="ref-8">8</xref>], while multimodal biometric fusion combining face and finger-vein features with FaceNet, VGG19, and Siamese architectures has been used to derive stable keys [<xref ref-type="bibr" rid="ref-9">9</xref>]. Post-quantum compatible keys based on facial CNNs and code-based extractors were proposed in [<xref ref-type="bibr" rid="ref-10">10</xref>], and high-entropy fingerprint-based keys using CNNs with Particle Swarm Optimization were introduced in [<xref ref-type="bibr" rid="ref-11">11</xref>]. In parallel, encryption keys derived from trinion Fourier transforms driven by chaotic systems were presented in [<xref ref-type="bibr" rid="ref-12">12</xref>], without deep learning or temporal adaptation. Unlike these static approaches, the present work focuses on dynamic video sequences using a temporally adaptive Fourier&#x2013;Riesz deep model for content-dependent key generation.</p>
</sec>
<sec id="s3">
<label>3.</label>
<title>Materials and Methods</title>
<sec id="s3_1">
<label>3.1.</label>
<title>Video Datasets</title>
<p>For experimental evaluation, the Akiyo sequence (300 frames) from the <ext-link ext-link-type="uri" xlink:href="https://Xiph.org">Xiph.org</ext-link> Video Test Media Repository, a standard collection of YUV test videos, was used as a reference [<xref ref-type="bibr" rid="ref-13">13</xref>]. Each frame, representing both static and dynamic scenes, was extracted and resized to 128 &#x00D7; 128 pixels before feature extraction.</p>
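This frame-preparation step can be sketched in a few lines of NumPy. The nearest-neighbour resampling below is an illustrative stand-in for whatever resizing routine the authors actually used, and the synthetic CIF-sized array simply stands in for a decoded luminance frame:

```python
import numpy as np

def resize_nearest(frame, out_h=128, out_w=128):
    """Resize a 2-D frame to out_h x out_w by nearest-neighbour sampling."""
    h, w = frame.shape
    rows = (np.arange(out_h) * h) // out_h   # source row index for each output row
    cols = (np.arange(out_w) * w) // out_w   # source column index for each output column
    return frame[rows[:, None], cols]

# Example: a synthetic CIF-sized (288 x 352) luminance frame, as in the Akiyo sequence
frame = np.arange(288 * 352, dtype=np.uint8).reshape(288, 352)
small = resize_nearest(frame)
```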
</sec>
<sec id="s3_2">
<label>3.2.</label>
<title>Hardware and Software Environment</title>
<p>Experiments were conducted on a Windows 10 platform using Python 3.10 with TensorFlow/Keras. Training was performed on a system equipped with an 8-core CPU, 16 GB RAM, and an NVIDIA GPU with 8 GB VRAM, enabling efficient tensor processing and accelerated optimization.</p>
</sec>
<sec id="s3_3">
<label>3.3.</label>
<title>Feature Extraction</title>
<p>In the proposed pipeline, four complementary features are extracted from each video frame to build a compact yet expressive spectro-directional representation: a spectral magnitude map <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:msub><mml:mi>M</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, a spectral phase component <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:msub><mml:mi mathvariant="normal">&#x03A6;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, a Riesz-based directional amplitude <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and an orientation map <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:msub><mml:mi mathvariant="normal">&#x0398;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. Together, these features form a compact structure that enables the generation of content-dependent keys.</p>
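As a concrete illustration, the four channels can be computed with NumPy. The frequency-domain Riesz multipliers below follow the standard definition of the 2-D Riesz transform; the exact normalisation and stacking order used by the authors are assumptions:

```python
import numpy as np

def spectro_directional_tensor(frame):
    """Build the four-channel representation (M_t, Phi_t, R_t, Theta_t) of a frame."""
    F = np.fft.fft2(frame.astype(float))
    M = np.abs(F)                      # spectral magnitude M_t
    Phi = np.angle(F)                  # spectral phase Phi_t
    # 2-D Riesz transform applied in the frequency domain via its multipliers
    h, w = frame.shape
    u = np.fft.fftfreq(h)[:, None]
    v = np.fft.fftfreq(w)[None, :]
    norm = np.hypot(u, v)
    norm[0, 0] = 1.0                   # avoid division by zero at the DC term
    r1 = np.real(np.fft.ifft2(-1j * (u / norm) * F))
    r2 = np.real(np.fft.ifft2(-1j * (v / norm) * F))
    R = np.hypot(r1, r2)               # directional amplitude R_t
    Theta = np.arctan2(r2, r1)         # local orientation Theta_t
    return np.stack([M, Phi, R, Theta], axis=-1)

tensor = spectro_directional_tensor(np.random.default_rng(0).random((128, 128)))
```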
</sec>
<sec id="s3_4">
<label>3.4.</label>
<title>Network Architecture</title>
<sec id="s3_4_1">
<label>3.4.1.</label>
<title>Adapted Dense Layer</title>
<p>We define a tensor-dependent orthogonal projection based on <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>:</p>
<p><disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:mrow><mml:mtext>y</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mi>x</mml:mi><mml:mo>+</mml:mo><mml:mi>b</mml:mi><mml:mo>,</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mi>O</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is an orthogonal matrix dependent on the tensor <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> belonging to the set <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>O</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> of orthogonal matrices of dimension <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>n</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>b</mml:mi></mml:math></inline-formula> is a bias term.</p>
<p>Here, the matrix <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is regularized to remain orthogonal:</p>
<p><disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mo>&#x22C5;</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>I</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
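One simple way to realise a tensor-dependent orthogonal weight matrix, sketched here under the assumption that &#x03A9;(&#x03C8;&#x0304;<sub>t</sub>) is obtained by QR-decomposing a matrix deterministically seeded from the tensor content (the paper does not specify this construction), is:

```python
import numpy as np

def omega(psi_t, n=16):
    """Tensor-dependent orthogonal matrix: QR of a matrix seeded from psi_t."""
    seed = int(np.sum(np.abs(psi_t) * 1e6)) % (2 ** 32)   # hypothetical seeding scheme
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))      # fix column signs so the map is deterministic

def adapted_dense(psi_t, x, b):
    """Adapted dense layer y = Omega(psi_t) . x + b, as in (1)."""
    return omega(psi_t, n=x.shape[0]) @ x + b

psi = np.linspace(0.0, 1.0, 64).reshape(8, 8)
y = adapted_dense(psi, np.ones(16), np.zeros(16))
W = omega(psi)
```

Because W is orthogonal, the layer preserves Euclidean norms, which is the numerical-robustness property the regularization in (2) is meant to maintain.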
</sec>
<sec id="s3_4_2">
<label>3.4.2.</label>
<title>Activation Function</title>
<p>The proposed activation function is not fixed but depends on the spectral-directional features:</p>
<p><disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x03C6;</mml:mi><mml:mrow><mml:mi>h</mml:mi><mml:mi>y</mml:mi><mml:mi>b</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>L</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>L</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> denotes the rectified linear unit, <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> denotes the sigmoid function, and <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mi>&#x03B1;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2208;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is a coefficient dynamically computed from <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
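A minimal NumPy sketch of the hybrid activation in (3). The paper does not fix how &#x03B1;(&#x03C8;&#x0304;<sub>t</sub>) is computed, so the sigmoid of the tensor's mean used here is an illustrative assumption:

```python
import numpy as np

def alpha(psi_t):
    """Coefficient in (0, 1) derived from the tensor; the exact map is an assumption."""
    return 1.0 / (1.0 + np.exp(-np.mean(psi_t)))

def hybrid_activation(x, psi_t):
    """phi_hybrid(x) = alpha * ReLU(x) + (1 - alpha) * sigmoid(x), as in (3)."""
    a = alpha(psi_t)
    return a * np.maximum(x, 0.0) + (1.0 - a) / (1.0 + np.exp(-x))

psi = np.ones((4, 4))
out = hybrid_activation(np.array([-2.0, 0.0, 2.0]), psi)
```

The convex blend keeps the activation monotone: for negative inputs it behaves like a damped sigmoid, for positive inputs it approaches a scaled ReLU.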
</sec>
<sec id="s3_4_3">
<label>3.4.3.</label>
<title>Jacobian</title>
<p>The input-output Jacobian, as mentioned in [<xref ref-type="bibr" rid="ref-14">14</xref>], factorizes layer by layer:</p>
<p><disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:msubsup><mml:mo movablelimits="false">&#x220F;</mml:mo><mml:mrow><mml:mi>l</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mi>D</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi><mml:mi>g</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mi>&#x03C6;</mml:mi><mml:mrow><mml:mi>h</mml:mi><mml:mi>y</mml:mi><mml:mi>b</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msubsup><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the encryption key at frame <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mi>t</mml:mi></mml:math></inline-formula>, <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the normalized spectral-directional tensor, <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the weight matrix of layer <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mi>l</mml:mi></mml:math></inline-formula>, <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:msubsup><mml:mi>&#x03C6;</mml:mi><mml:mrow><mml:mi>h</mml:mi><mml:mi>y</mml:mi><mml:mi>b</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msubsup><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is the derivative of the hybrid activation function, and <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the pre-activation at layer <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mi>l</mml:mi></mml:math></inline-formula>.</p>
<p><disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>l</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is an adaptive orthogonal matrix dependent on the tensor <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>l</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> represents the outputs of the previous layer, and <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the bias vector of layer <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mi>l</mml:mi></mml:math></inline-formula>.</p>
<p>Owing to the bounded nature of the hybrid activation derivative, we impose:</p>
<p><disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>&#x003C;</mml:mo><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2264;</mml:mo><mml:msubsup><mml:mi>&#x03C6;</mml:mi><mml:mrow><mml:mi>h</mml:mi><mml:mi>y</mml:mi><mml:mi>b</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:msub><mml:mover><mml:mi>S</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2264;</mml:mo><mml:mn>1</mml:mn></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>This ensures that the Frobenius norm <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:msub><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> remains controlled. The lower bound <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> prevents degenerate mappings with a vanishing Jacobian norm, while the upper bound <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:msub><mml:mover><mml:mi>S</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2264;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> avoids gradient explosion.</p>
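The bound can be checked numerically: because each layer matrix is orthogonal, the singular values of each factor in (4) are exactly the activation derivatives, so the singular values of the full Jacobian stay within the stated margins. A sketch (the layer count and derivative range are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 8, 4
s_lo, s_hi = 0.2, 1.0          # illustrative bounds S_l and S-bar_l from (6)

J = np.eye(n)
for _ in range(L):
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal layer matrix
    d = rng.uniform(s_lo, s_hi, n)                     # bounded activation derivatives
    J = Q @ np.diag(d) @ J                             # one factor of the product in (4)

sv = np.linalg.svd(J, compute_uv=False)
```

Since orthogonal factors preserve singular-value bounds, the spectral norm of J is at most the product of the per-layer maxima and at least the product of the minima.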
</sec>
</sec>
<sec id="s3_5">
<label>3.5.</label>
<title>Training Procedure</title>
<sec id="s3_5_1">
<label>3.5.1.</label>
<title>Training Hyperparameters and Configuration</title>
<p>The network was trained using gradient descent with an adaptive learning rate initialized at &#x03B7;<sub>0</sub> &#x003D; 10<sup>&#x2212;3</sup> and dynamically modulated according to the spectral energy of the input tensor. The number of epochs was set to E &#x003D; 50, as the proposed orthogonally constrained and Jacobian-regularized architecture exhibited rapid convergence. No mini-batching was used, as training was performed sequentially frame by frame. The composite loss weights were empirically fixed as follows: orthogonality penalty &#x03BB;<sub>1</sub> &#x003D; 0.1, inter-frame decorrelation &#x03BB;<sub>2</sub> &#x003D; 0.5, and Jacobian margin constraint &#x03BB;<sub>3</sub> &#x003D; 0.2.</p>
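The composite objective with these weights can be sketched as follows. The paper specifies only the weights &#x03BB;<sub>1</sub>, &#x03BB;<sub>2</sub>, &#x03BB;<sub>3</sub>; the individual penalty terms and the margin s_min below are plausible instantiations, not the authors' exact formulas:

```python
import numpy as np

def composite_loss(W, k_t, k_prev, jac_sv, lam1=0.1, lam2=0.5, lam3=0.2, s_min=0.05):
    """Weighted sum of orthogonality, inter-frame decorrelation, and Jacobian-margin terms."""
    n = W.shape[1]
    l_orth = np.linalg.norm(W.T @ W - np.eye(n)) ** 2   # orthogonality penalty
    l_corr = abs(np.corrcoef(k_t, k_prev)[0, 1])        # inter-frame key correlation
    l_jac = max(0.0, s_min - jac_sv.min())              # hinge on the Jacobian margin
    return lam1 * l_orth + lam2 * l_corr + lam3 * l_jac

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))      # already-orthogonal weights
loss = composite_loss(Q, rng.random(32), rng.random(32), np.array([0.3, 0.8]))
```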
</sec>
<sec id="s3_5_2">
<label>3.5.2.</label>
<title>Frame-Wise Local and Adaptive Learning</title>
<p>For each frame <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mi>t</mml:mi></mml:math></inline-formula>, the network parameters <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> are locally updated as:</p>
<p><disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi mathvariant="normal">&#x03B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="normal">&#x03B8;</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>&#x02112;</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>u</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the deep neural network parameterized by <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>; <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the normalized input tensor at time <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mi>t</mml:mi></mml:math></inline-formula>; <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the frame at time <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mi>t</mml:mi></mml:math></inline-formula>; <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:msub><mml:mrow><mml:mi>&#x02112;</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>u</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula><italic>(&#x22C5;,&#x22C5;)</italic> denotes the unsupervised loss function; <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:msub><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the gradient of the loss with respect to the parameters <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:mrow><mml:mi mathvariant="normal">&#x03B8;</mml:mi></mml:mrow></mml:math></inline-formula>; and <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the learning rate at iteration <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mi>t</mml:mi></mml:math></inline-formula>.</p>
<p>The learning rate <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> was itself modulated by the spectral energy of the tensor:</p>
<p><disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>h</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the normalized input tensor at time <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mi>t</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> denotes the spectral energy of <inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
<p>As a result, the update becomes self-adaptive: each frame adjusts the learning rate according to its spectral content.</p>
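A sketch of (8). The modulation function h is not specified in the paper, so the monotonically decreasing map below is an illustrative assumption; it merely demonstrates a learning rate that shrinks as the spectral energy of the tensor grows:

```python
import numpy as np

def adaptive_lr(psi_t, eta0=1e-3):
    """eta_t = h(E_spec(psi_t)) with h an assumed, monotonically decreasing map."""
    e_spec = np.mean(np.abs(np.fft.fftn(psi_t)) ** 2)   # spectral energy of the tensor
    return eta0 / (1.0 + np.log1p(e_spec))

flat = np.ones((16, 16))                  # spectrally concentrated content
textured = np.tile([0.0, 1.0], (16, 8))   # high-frequency content
lr_flat, lr_tex = adaptive_lr(flat), adaptive_lr(textured)
```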
</sec>
<sec id="s3_5_3">
<label>3.5.3.</label>
<title>Loss Function</title>
<p>The unsupervised objective <inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:msub><mml:mrow><mml:mi>&#x02112;</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>u</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> enforces orthogonality, temporal decorrelation, and minimum sensitivity.</p>
<p>In accordance with <xref ref-type="disp-formula" rid="eqn-1">(1)</xref>, the dense layer is parameterised by a square orthogonal matrix conditioned on the spectral&#x2013;directional tensor, denoted <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>. After each gradient update, we enforce orthogonality by reprojecting the updated matrix onto the orthogonal manifold using a QR factorisation and retaining the Q factor, such that:</p>
<p><disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mo>&#x22C5;</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>I</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>To further stabilise optimisation, we add a soft orthogonality penalty:</p>
<p><disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mo>&#x22C5;</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>I</mml:mi><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
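<p>A minimal sketch of the QR reprojection enforcing (9) together with the soft penalty (10), assuming NumPy and a square dense-layer matrix; the sign correction on the Q factor is an implementation detail not stated in the paper.</p>

```python
import numpy as np

def qr_retract(W):
    """Reproject onto the orthogonal manifold (keep Q from QR), per (9).

    Multiplying each column by the sign of R's diagonal makes the
    factorisation unique; this is an implementation choice.
    """
    Q, R = np.linalg.qr(W)
    return Q * np.sign(np.diag(R))

def orth_penalty(W):
    """Soft penalty ||W^T W - I||_F^2, per (10)."""
    G = W.T @ W - np.eye(W.shape[1])
    return float(np.sum(G ** 2))

rng = np.random.default_rng(1)
W = rng.standard_normal((6, 6))  # dense-layer matrix after a gradient step
W_proj = qr_retract(W)           # back on the orthogonal manifold
```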
<p>To enforce inter-frame independence, we minimize the squared Pearson correlation between successive keys:</p>
<p><disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>r</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where corr(&#x00B7;,&#x00B7;) is the Pearson correlation on vectorized keys, averaged over the mini-batch.</p>
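<p>The decorrelation term (11) can be sketched as follows, computing the squared Pearson correlation between two vectorized keys (mini-batch averaging is omitted for brevity).</p>

```python
import numpy as np

def decorrelation_loss(K_t, K_next):
    """Squared Pearson correlation of successive vectorized keys, per (11)."""
    r = np.corrcoef(K_t.ravel(), K_next.ravel())[0, 1]
    return float(r ** 2)

rng = np.random.default_rng(2)
K1 = rng.random((16, 16, 3))  # key for frame t
K2 = rng.random((16, 16, 3))  # key for frame t+1
```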
<p>Finally, we enforce a minimum Jacobian norm to promote the avalanche effect:</p>
<p><disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>&#x03B5;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mo symmetric="true">&#x2016;</mml:mo><mml:msub><mml:mi>J</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo symmetric="true">&#x2016;</mml:mo></mml:mrow><mml:mrow><mml:mi>F</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mrow><mml:msub><mml:mi>J</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:mrow><mml:msub><mml:mi>K</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow></mml:mrow><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:mfrac></mml:mstyle></mml:math></inline-formula> denotes the Jacobian of the key with respect to the normalised input tensor.</p>
<p>The overall loss is then given by the weighted sum:</p>
<p><disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>&#x02112;</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>u</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
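<p>The Jacobian hinge (12) and the weighted sum (13) can be sketched as below; the threshold &#x03B5; and the weights &#x03BB; are unreported hyper-parameters, so the default values used here are placeholders only.</p>

```python
import numpy as np

def jacobian_penalty(J, eps=1.0):
    """Hinge max(0, eps - ||J||_F)^2 promoting the avalanche effect, per (12)."""
    return float(max(0.0, eps - np.linalg.norm(J)) ** 2)

def total_loss(L_orth, L_div, L_jac, lam_orth=1.0, lam_div=1.0, lam_jac=1.0):
    """Weighted sum of the three unsupervised terms, per (13)."""
    return lam_orth * L_orth + lam_div * L_div + lam_jac * L_jac
```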
</sec>
</sec>
<sec id="s3_6">
<label>3.6.</label>
<title>Key Generation</title>
<p>The normalized tensor <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:msub><mml:mover><mml:mrow><mml:mi mathvariant="normal">&#x03C8;</mml:mi></mml:mrow><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mi>&#x03F5;</mml:mi><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> is used as input to an adapted deep neural network <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:msub><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> with the following configuration.</p>
<p>The network takes <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> as input and outputs an encryption key in three channels Red, Green and Blue (RGB):</p>
<p><disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mover><mml:mrow><mml:mi mathvariant="normal">&#x03C8;</mml:mi></mml:mrow><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mi>&#x03F5;</mml:mi><mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:msub><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the deep neural network parameterized by <inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> applied to the normalized tensor <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> denotes the set of real-valued tensors of size <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:mi>M</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula> with entries in the unit interval.</p>
<p>The network generates a continuous RGB encryption key that is then scaled to an 8-bit integer array; the channel index <italic>c</italic> <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:mi>&#x03F5;</mml:mi><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula> is made explicit in the following formula:</p>
<p><disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msubsup><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>&#x230A;</mml:mo><mml:mn>255</mml:mn><mml:mo>&#x22C5;</mml:mo><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x230B;</mml:mo></mml:mrow><mml:mi>&#x03F5;</mml:mi><mml:msub><mml:mrow><mml:mi mathvariant="double-struck">Z</mml:mi></mml:mrow><mml:mrow><mml:mn>256</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>for c <inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:mi>&#x03F5;</mml:mi><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula></p>
<p>where <inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> represents the continuous encryption key generated by the deep neural network, <inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:mi>c</mml:mi></mml:math></inline-formula> denotes the <inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi></mml:math></inline-formula> or <inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:mi>B</mml:mi></mml:math></inline-formula> channel, and <inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:msub><mml:mrow><mml:mi mathvariant="double-struck">Z</mml:mi></mml:mrow><mml:mrow><mml:mn>256</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> denotes the set of integers from 0 to 255.</p>
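<p>The quantization step (15) maps each continuous key value to an integer in 0&#x2013;255; a minimal sketch follows (the clip is a numerical safeguard added here for inputs slightly outside [0, 1], not part of the formula).</p>

```python
import numpy as np

def quantize_key(K):
    """Floor-scale a continuous key in [0, 1] to Z_256, per (15)."""
    # clip guards against values marginally outside [0, 1]
    return np.clip(np.floor(255.0 * K), 0, 255).astype(np.uint8)

rng = np.random.default_rng(3)
K_t = rng.random((4, 4, 3))   # continuous network output in [0, 1]
K_q = quantize_key(K_t)       # 8-bit key, values 0..255
```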
</sec>
<sec id="s3_7">
<label>3.7.</label>
<title>Video Encryption</title>
<p>The XOR-based encryption is then performed as:</p>
<p><disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:msubsup><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>y</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>R</mml:mi><mml:mi>G</mml:mi><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msubsup><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>x</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>y</mml:mtext></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2295;</mml:mo><mml:msubsup><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p><disp-formula id="ueqn-1"><mml:math id="mml-ueqn-1" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:mi mathvariant="normal">&#x2200;</mml:mi><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow><mml:mi>&#x03F5;</mml:mi><mml:mrow><mml:mo>{</mml:mo><mml:mi>R</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:msubsup><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>R</mml:mi><mml:mi>G</mml:mi><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msubsup></mml:math></inline-formula>(x, y, c) represents the original pixel value for channel <inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:mi>c</mml:mi></mml:math></inline-formula>, and &#x2295; denotes the bitwise XOR operation applied with the key <inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:msubsup><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>&#x2032;</mml:mo></mml:msubsup></mml:math></inline-formula>.</p>
</sec>
<sec id="s3_8">
<label>3.8.</label>
<title>Video Decryption</title>
<p>As mentioned in [<xref ref-type="bibr" rid="ref-15">15</xref>] regarding XOR properties, if a pixel <inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mi>f</mml:mi></mml:math></inline-formula> has been encrypted with a key <inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:mi>k</mml:mi></mml:math></inline-formula>, the original can be recovered by:</p>
<p><disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:mtable columnalign="right center left" rowspacing="3pt" columnspacing="0 thickmathspace" displaystyle="true"><mml:mtr><mml:mtd><mml:mrow><mml:mover><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo stretchy="false">(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2295;</mml:mo><mml:msubsup><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo>&#x2032;</mml:mo></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:msubsup><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula>(x, y, c) denotes the encrypted pixel at time <italic>t</italic> and channel <italic>c</italic>, <inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:msubsup><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> refers to the key, identical to the one used during encryption, and <inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:mrow><mml:mover><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> represents the decrypted pixel [<xref ref-type="bibr" rid="ref-16">16</xref>].</p>
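<p>Because XOR is an involution, one routine serves for both (16) and (17); a minimal round-trip sketch on random 8-bit data:</p>

```python
import numpy as np

def xor_channelwise(frame, key):
    """Bitwise XOR of an 8-bit RGB frame with an 8-bit key, per (16)/(17)."""
    return np.bitwise_xor(frame, key)

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
key   = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)

cipher   = xor_channelwise(frame, key)   # encryption, (16)
restored = xor_channelwise(cipher, key)  # decryption, (17): same key recovers f
```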
</sec>
<sec id="s3_9">
<label>3.9.</label>
<title>Algorithm</title>
<fig id="fig-10">
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-10.png"><alt-text>Images</alt-text></graphic>
</fig>
</sec>
<sec id="s3_10">
<label>3.10.</label>
<title>Threat Model and Security Properties</title>
<p>Security was evaluated under standard threat models, including Ciphertext-Only Attack, Known-Plaintext Attack, and Chosen-Plaintext Attack, in accordance with Kerckhoffs&#x2019; principle. The spectro-directional deep key generator produced frame-wise, content-dependent keys, preventing key reuse and minimizing temporal correlations. Adaptive learning and hybrid activation introduced strong nonlinearity, while Jacobian-constrained training and orthogonality ensured high entropy, avalanche effect, and statistical independence. Consequently, the framework provided robust security despite the use of XOR-based encryption.</p>
</sec>
</sec>
<sec id="s4">
<label>4.</label>
<title>Results</title>
<p>This section presents results on key quality and their effect on the security and robustness of 3D data encryption [<xref ref-type="bibr" rid="ref-17">17</xref>], [<xref ref-type="bibr" rid="ref-18">18</xref>].</p>
<sec id="s4_1">
<label>4.1.</label>
<title>Analysis and Evaluation of Generated Keys</title>
<p>Key quality and security were assessed using metrics for randomness, uniqueness, and robustness [<xref ref-type="bibr" rid="ref-19">19</xref>].</p>
<sec id="s4_1_1">
<label>4.1.1.</label>
<title>Key Entropy</title>
<p><xref ref-type="fig" rid="fig-1">Fig. 1</xref> shows the cumulative distribution of key entropy, illustrating the keys&#x2019; uniformity.</p>
<fig id="fig-1">
<label>Fig. 1</label>
<caption>
<title>Cumulative distribution function of key entropy.</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-1.png"><alt-text>Images</alt-text></graphic>
</fig>
<p>The generated keys exhibited a mean entropy of 7.67 bits per byte, with a 95% confidence interval that remained above weak-randomness thresholds, thereby confirming strong statistical randomness and cryptographic suitability [<xref ref-type="bibr" rid="ref-20">20</xref>].</p>
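<p>The per-byte Shannon entropy reported above can be estimated as in the sketch below; the uniform random array stands in for a generated key and is an illustrative assumption.</p>

```python
import numpy as np

def byte_entropy(key):
    """Shannon entropy in bits per byte of an 8-bit key array."""
    counts = np.bincount(key.ravel(), minlength=256)
    p = counts[counts > 0] / key.size
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(5)
key = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in key
H = byte_entropy(key)  # close to the 8-bit maximum for a random key
```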
</sec>
<sec id="s4_1_2">
<label>4.1.2.</label>
<title>Avalanche Effect</title>
<p><xref ref-type="fig" rid="fig-2">Fig. 2</xref> shows the avalanche effect, where small input changes greatly alter the generated key [<xref ref-type="bibr" rid="ref-21">21</xref>].</p>
<fig id="fig-2">
<label>Fig. 2</label>
<caption>
<title>Distribution of the avalanche effect (Hamming distances between 256-bit keys).</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-2.png"><alt-text>Images</alt-text></graphic>
</fig>
<p>The mean Hamming distance of 129.62 bits with a 95% confidence interval from 129.11 to 130.13 confirmed a balanced avalanche effect, while a one-sample t-test against 128 bits yielded p &#x003C; 0.001, and the range 116&#x2013;143 bits demonstrated strong diffusion and resistance to differential attacks [<xref ref-type="bibr" rid="ref-22">22</xref>], [<xref ref-type="bibr" rid="ref-23">23</xref>].</p>
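<p>The Hamming-distance measurement underlying Fig. 2 can be sketched as follows for two 256-bit (32-byte) keys; the random keys below merely illustrate the computation.</p>

```python
import numpy as np

def hamming_bits(key_a, key_b):
    """Number of differing bits between two byte-array keys."""
    return int(np.unpackbits(np.bitwise_xor(key_a, key_b)).sum())

rng = np.random.default_rng(6)
a = rng.integers(0, 256, size=32, dtype=np.uint8)  # 256-bit key
b = rng.integers(0, 256, size=32, dtype=np.uint8)  # key after an input change
d = hamming_bits(a, b)  # ideally close to 128 of 256 bits
```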
</sec>
<sec id="s4_1_3">
<label>4.1.3.</label>
<title>Inter-Frame and Inter-Channel Independence</title>
<p>The analysis illustrated in <xref ref-type="fig" rid="fig-3">Fig. 3</xref> evaluates the independence of keys across frames and color channels, ensuring high variability and preventing redundancy [<xref ref-type="bibr" rid="ref-24">24</xref>].</p>
<fig id="fig-3">
<label>Fig. 3</label>
<caption>
<title>Key independence between successive frames.</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-3.png"><alt-text>Images</alt-text></graphic>
</fig>
<p>Inter-frame correlations averaged 0.002 over 300 frames with a 95% confidence interval from &#x2212;0.013 to 0.017 and a p-value of 0.77, consistent with [<xref ref-type="bibr" rid="ref-25">25</xref>], confirming strong temporal independence between successive keys [<xref ref-type="bibr" rid="ref-26">26</xref>].</p>
<p><xref ref-type="fig" rid="fig-4">Fig. 4</xref> shows the correlations between the R, G, and B channels of the generated keys, which are near zero, ranging from &#x2212;0.04 to 0.01, indicating minimal redundancy and strong statistical independence.</p>
<fig id="fig-4">
<label>Fig. 4</label>
<caption>
<title>Key independence across color channels (R, G, B).</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-4.png"><alt-text>Images</alt-text></graphic>
</fig>
<p>According to [<xref ref-type="bibr" rid="ref-27">27</xref>], as shown in <xref ref-type="table" rid="table-1">Table I</xref>, the inter-channel correlations were weak, with averages of 0.0129 for RG, &#x2212;0.045 for RB, and 0.0129 for GB over 20 frames, corresponding to 95% confidence intervals of [&#x2212;0.0648, 0.0906], [&#x2212;0.122, 0.032], and [&#x2212;0.0648, 0.0906], and p-values of 0.73, 0.23, and 0.73, respectively. These results confirm statistical independence and strong, non-redundant key variability [<xref ref-type="bibr" rid="ref-28">28</xref>].</p>
<table-wrap id="table-1">
<label>Table I</label>
<caption>
<title>Descriptive Statistics of Inter-Channel Correlations (R&#x2013;G, R&#x2013;B, G&#x2013;B) over 20 Frames</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr align="center">
<th></th>
<th>Frame</th>
<th>R-G</th>
<th>R-B</th>
<th>G-B</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>N<sub>f</sub></td>
<td>20</td>
<td>20</td>
<td>20</td>
<td>20</td>
</tr>
<tr align="center">
<td>Mean</td>
<td>118.55</td>
<td>0.0129</td>
<td>&#x2212;0.045</td>
<td>0.0129</td>
</tr>
<tr align="center">
<td>Std</td>
<td>74.013</td>
<td>0.166</td>
<td>0.164</td>
<td>0.166</td>
</tr>
<tr align="center">
<td>Min</td>
<td>0.00</td>
<td>&#x2212;0.273</td>
<td>&#x2212;0.335</td>
<td>&#x2212;0.273</td>
</tr>
<tr align="center">
<td>Q<sub>1</sub></td>
<td>59.00</td>
<td>&#x2212;0.103</td>
<td>&#x2212;0.167</td>
<td>&#x2212;0.103</td>
</tr>
<tr align="center">
<td>Q<sub>2</sub></td>
<td>118.50</td>
<td>0.0175</td>
<td>&#x2212;0.032</td>
<td>0.017</td>
</tr>
<tr align="center">
<td>Q<sub>3</sub></td>
<td>178.00</td>
<td>0.126</td>
<td>0.071</td>
<td>0.126</td>
</tr>
<tr align="center">
<td>Max</td>
<td>238.00</td>
<td>0.363</td>
<td>0.285</td>
<td>0.363</td>
</tr>
</tbody>
</table>
</table-wrap>
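<p>The inter-channel correlations of Table I can be reproduced in principle with the following sketch, using a random key as a stand-in for a generated one.</p>

```python
import numpy as np

def channel_correlations(key):
    """Pairwise Pearson correlations of the R, G, B planes of one key."""
    R, G, B = (key[..., c].ravel() for c in range(3))
    return (np.corrcoef(R, G)[0, 1],
            np.corrcoef(R, B)[0, 1],
            np.corrcoef(G, B)[0, 1])

rng = np.random.default_rng(7)
key = rng.random((32, 32, 3))           # stand-in for one generated key
rg, rb, gb = channel_correlations(key)  # all near zero for independent planes
```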
</sec>
</sec>
<sec id="s4_2">
<label>4.2.</label>
<title>Evaluation of Video Encryption and Decryption Performance</title>
<sec id="s4_2_1">
<label>4.2.1.</label>
<title>Video Encryption and Decryption Results</title>
<p>Original, encrypted, and decrypted frames are visually compared in <xref ref-type="fig" rid="fig-5">Fig. 5</xref> to assess encryption and reconstruction fidelity [<xref ref-type="bibr" rid="ref-29">29</xref>].</p>
<fig id="fig-5">
<label>Fig. 5</label>
<caption>
<title>Comparison of (a) original, (b) encrypted, and (c) decrypted video frames.</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-5.png"><alt-text>Images</alt-text></graphic>
</fig>
<p>The Akiyo sequence (a) is unreadable after encryption (b) and fully restored after decryption (c), showing the effectiveness of the key generation method [<xref ref-type="bibr" rid="ref-30">30</xref>].</p>
</sec>
<sec id="s4_2_2">
<label>4.2.2.</label>
<title>Correlation between Adjacent Pixels</title>
<p>In accordance with [<xref ref-type="bibr" rid="ref-31">31</xref>], the proposed encryption, as depicted in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>, significantly reduced adjacent-pixel correlation to a negligible level, confirming key effectiveness.</p>
<fig id="fig-6">
<label>Fig. 6</label>
<caption>
<title>Correlation between adjacent pixels.</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-6.png"><alt-text>Images</alt-text></graphic>
</fig>
<p>The original videos exhibited a pixel correlation of approximately 0.9, which dropped close to zero after encryption, confirming the effective removal of spatial redundancies.</p>
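<p>The adjacent-pixel correlation measurement can be sketched as follows; the smooth synthetic image below is an illustrative stand-in for a video frame, and XOR with a random key plays the role of the encryption.</p>

```python
import numpy as np

def adjacent_corr(img):
    """Pearson correlation between horizontally adjacent pixel pairs."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(8)
# Smooth synthetic frame: strong correlation between neighbouring pixels.
plain = np.cumsum(rng.integers(0, 3, size=(64, 64)), axis=1).astype(np.uint8)
# XOR with a random 8-bit key destroys the spatial structure.
cipher = np.bitwise_xor(plain, rng.integers(0, 256, size=(64, 64), dtype=np.uint8))
```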
</sec>
<sec id="s4_2_3">
<label>4.2.3.</label>
<title>Validation of Key Effectiveness via Entropy and Directional Correlations</title>
<p><xref ref-type="table" rid="table-2">Tables II</xref> and <xref ref-type="table" rid="table-3">III</xref> present the entropy and correlations before and after encryption to validate the effectiveness of the generated keys.</p>
<table-wrap id="table-2">
<label>Table II</label>
<caption>
<title>Entropy and Horizontal/Vertical Correlations</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr align="center">
<th>Frame</th>
<th>Entropy (bits)</th>
<th>H-Corr (Orig)</th>
<th>H-Corr (Enc)</th>
<th>V-Corr (Orig)</th>
<th>V-Corr (Enc)</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>1</td>
<td>7.98</td>
<td>0.91</td>
<td>0.02</td>
<td>0.88</td>
<td>0.00</td>
</tr>
<tr align="center">
<td>2</td>
<td>7.97</td>
<td>0.92</td>
<td>0.03</td>
<td>0.89</td>
<td>&#x2212;0.01</td>
</tr>
<tr align="center">
<td>3</td>
<td>7.99</td>
<td>0.93</td>
<td>0.01</td>
<td>0.90</td>
<td>0.02</td>
</tr>
<tr align="center">
<td>4</td>
<td>7.96</td>
<td>0.90</td>
<td>0.00</td>
<td>0.87</td>
<td>0.01</td>
</tr>
<tr align="center">
<td>5</td>
<td>7.98</td>
<td>0.91</td>
<td>&#x2212;0.01</td>
<td>0.89</td>
<td>&#x2212;0.02</td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-3">
<label>Table III</label>
<caption>
<title>Diagonal Correlations</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr align="center">
<th>Frame</th>
<th>D-Corr (Orig)</th>
<th>D-Corr (Enc)</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>1</td>
<td>0.87</td>
<td>0.01</td>
</tr>
<tr align="center">
<td>2</td>
<td>0.88</td>
<td>0.00</td>
</tr>
<tr align="center">
<td>3</td>
<td>0.89</td>
<td>&#x2212;0.01</td>
</tr>
<tr align="center">
<td>4</td>
<td>0.86</td>
<td>0.02</td>
</tr>
<tr align="center">
<td>5</td>
<td>0.87</td>
<td>0.01</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>According to Ghouate [<xref ref-type="bibr" rid="ref-30">30</xref>], entropy near 8 bits ensures strong randomness; our encrypted frames reached entropies between 7.96 and 7.99 bits, with directional correlations averaging 0.006 over 5 frames and a 95% confidence interval from &#x2212;0.008 to 0.020, demonstrating effective key generation.</p>
</sec>
<sec id="s4_2_4">
<label>4.2.4.</label>
<title>Evaluation of System Robustness against Disturbances</title>
<p>Robustness was tested via PSNR after decrypting videos affected by noise, compression, or data loss.</p>
<p>Our keys provided robust encryption: as <xref ref-type="table" rid="table-4">Table IV</xref> highlights, we achieved a PSNR of 33.8 dB under Gaussian noise over 300 frames with a 95% confidence interval from 33.76 to 33.84 dB, together with 35.7 dB for JPEG compression and 32 dB for data loss, ensuring reliable visual recovery [<xref ref-type="bibr" rid="ref-32">32</xref>].</p>
<table-wrap id="table-4">
<label>Table IV</label>
<caption>
<title>Evaluation of Encryption Robustness against Different Types of Attacks</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr align="center">
<th>Type of attack</th>
<th>Average PSNR</th>
<th>Interpretation</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>Gaussian noise</td>
<td>&#x007E;33.8 dB</td>
<td>Minimal visual degradation; video remains usable.</td>
</tr>
<tr align="center">
<td>JPEG compression (Q &#x003D; 75%)</td>
<td>&#x007E;35.7 dB</td>
<td>The encryption withstands moderate compression well.</td>
</tr>
<tr align="center">
<td>Packet loss</td>
<td>&#x007E;32 dB</td>
<td>Good robustness; video remains intelligible.</td>
</tr>
</tbody>
</table>
</table-wrap>
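<p>As an illustration of how the PSNR values in Table IV are obtained, the sketch below computes PSNR between an original and a degraded 8-bit frame, with simulated Gaussian channel noise; it is an assumed minimal setup, not the authors' evaluation pipeline.</p>

```python
import numpy as np

def psnr(original, recovered):
    """PSNR in dB between two 8-bit frames (MAX = 255)."""
    mse = np.mean((original.astype(float) - recovered.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return float(10 * np.log10(255.0 ** 2 / mse))

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
# Simulate mild Gaussian channel noise on a decrypted frame (sigma = 5).
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(psnr(frame, noisy))  # roughly mid-30s dB for sigma = 5
```

<p>Values above roughly 30 dB, as in Table IV, generally correspond to visually usable reconstructions.</p>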
</sec>
</sec>
<sec id="s4_3">
<label>4.3.</label>
<title>Comparison with the Existing Approaches</title>
<sec id="s4_3_1">
<label>4.3.1.</label>
<title>Comparative Evolution of Frame PSNR</title>
<p>As <xref ref-type="fig" rid="fig-7">Fig. 7</xref> reveals, the proposed method achieved a stable PSNR of 42 dB over 300 frames, with a 95% confidence interval of [41.94, 42.06] dB, thus ensuring high visual quality according to [<xref ref-type="bibr" rid="ref-33">33</xref>]. In contrast, Chaotic Maps, Scalable Video Coding (SVC), and Selective Video Encryption (H.264) exhibited lower PSNR values of 34.02 dB, 37.95 dB, and 30.09 dB, respectively, indicating greater visual loss [<xref ref-type="bibr" rid="ref-34">34</xref>]&#x2013;[<xref ref-type="bibr" rid="ref-37">37</xref>].</p>
<fig id="fig-7">
<label>Fig. 7</label>
<caption>
<title>PSNR performance of proposed and selected existing approaches.</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-7.png"><alt-text>Images</alt-text></graphic>
</fig>
</sec>
</sec>
<sec id="s4_4">
<label>4.4.</label>
<title>Average SSIM Distribution</title>
<p><xref ref-type="fig" rid="fig-8">Fig. 8</xref> illustrates that the proposed method achieved an SSIM of 0.95 over 300 frames (95% confidence interval: 0.949 to 0.951), exceeding the 0.92 of SVC [<xref ref-type="bibr" rid="ref-36">36</xref>] but remaining slightly below the perfectly stable 1.0 reported in [<xref ref-type="bibr" rid="ref-38">38</xref>].</p>
<fig id="fig-8">
<label>Fig. 8</label>
<caption>
<title>Comparative evolution of frame SSIM: Proposed vs. existing methods.</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-8.png"><alt-text>Images</alt-text></graphic>
</fig>
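<p>SSIM compares luminance, contrast, and structure between two frames. The sketch below uses a simplified single-window SSIM computed over the whole frame, rather than the local-window variant normally reported; it is an illustration under that assumption, not the evaluation code used here.</p>

```python
import numpy as np

def global_ssim(x, y):
    """Simplified single-window SSIM over a pair of 8-bit frames."""
    x = x.astype(float)
    y = y.astype(float)
    c1 = (0.01 * 255) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * 255) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    return float(num / den)

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
print(global_ssim(frame, frame))  # 1.0 for identical frames
```

<p>An SSIM of 0.95 between original and decrypted frames thus indicates that structural content is almost fully preserved after the encryption&#x2013;decryption cycle.</p>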
</sec>
<sec id="s4_5">
<label>4.5.</label>
<title>Encrypted Frame Entropy Evolution</title>
<p><xref ref-type="fig" rid="fig-9">Fig. 9</xref> shows that the proposed method achieved an entropy close to 8 bits [<xref ref-type="bibr" rid="ref-38">38</xref>], comparable to Coding Characteristics at 7.89 bits [<xref ref-type="bibr" rid="ref-39">39</xref>] and Block Scrambling at 8 bits.</p>
<fig id="fig-9">
<label>Fig. 9</label>
<caption>
<title>Comparative evolution of encrypted frame entropy for the proposed method, coding characteristics and block scrambling.</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="EJ-AI_70120-fig-9.png"><alt-text>Images</alt-text></graphic>
</fig>
</sec>
<sec id="s4_6">
<label>4.6.</label>
<title>Ablation Study on the Contribution of Model Components</title>
<p>The ablation results in <xref ref-type="table" rid="table-5">Table V</xref> show that combining Fourier and Riesz features with orthogonality constraints, Jacobian adaptation, and hybrid ReLU/Sigmoid activation produces cryptographic keys of the highest quality.</p>
<table-wrap id="table-5">
<label>Table V</label>
<caption>
<title>Ablation Study Evaluating the Contribution of Each Component to Key Quality</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr align="center">
<th>Config.</th>
<th>Feat.</th>
<th>Ortho.</th>
<th>Jac.</th>
<th>Activation</th>
<th>Frame-wise</th>
<th>Ent.</th>
<th>Corr.</th>
<th>Ham.</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>C<sub>1</sub></td>
<td>F</td>
<td>No</td>
<td>No</td>
<td>ReLU</td>
<td>No</td>
<td>7.65</td>
<td>0.12</td>
<td>121.4</td>
</tr>
<tr align="center">
<td>C<sub>2</sub></td>
<td>R</td>
<td>No</td>
<td>No</td>
<td>Sigmoid</td>
<td>No</td>
<td>7.62</td>
<td>0.15</td>
<td>119.8</td>
</tr>
<tr align="center">
<td>C<sub>3</sub></td>
<td>F &#x002B; R</td>
<td>No</td>
<td>No</td>
<td>Tanh</td>
<td>No</td>
<td>7.71</td>
<td>0.07</td>
<td>124.6</td>
</tr>
<tr align="center">
<td>C<sub>4</sub></td>
<td>F &#x002B; R</td>
<td>Yes</td>
<td>No</td>
<td>ReLU/Sigmoid</td>
<td>Partial</td>
<td>7.79</td>
<td>0.03</td>
<td>127.1</td>
</tr>
<tr align="center">
<td>C<sub>5</sub> (proposed)</td>
<td>F &#x002B; R</td>
<td>Yes</td>
<td>Yes</td>
<td>ReLU/Sigmoid</td>
<td>Yes</td>
<td>7.96</td>
<td>&#x2248;0.00</td>
<td>129.6</td>
</tr>
</tbody>
</table>
<table-wrap-foot><fn id="table-5fn1" fn-type="other">
<p>Note: Feat.: Features; Ortho.: Orthogonality; Jac.: Jacobian; Ent.: Entropy; Corr.: Correlation; Ham.: Hamming; F: Fourier features; R: Riesz features; F&#x002B;R: combined Fourier and Riesz features.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="s4_7">
<label>4.7.</label>
<title>Security Validation via Neural Discriminator Attack</title>
<p>A neural discriminator trained on 70% of the 300 frames over 50 epochs achieved 49.8% &#x00B1; 1.2% accuracy, i.e., chance level, showing that encrypted frames are statistically indistinguishable from random noise and confirming robustness against neural distinguishing attacks.</p>
</sec>
<sec id="s4_8">
<label>4.8.</label>
<title>Computational Performance Analysis</title>
<p>Training took 2.3 min on GPU and 9.8 min on CPU for 50 epochs, with an average inference time of 6.4 ms (GPU) and 28.7 ms (CPU), demonstrating near real-time processing. The theoretical complexity <inline-formula id="ieqn-78"><mml:math id="mml-ieqn-78"><mml:mi>O</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>E</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>T</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msup><mml:mi>N</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mi>L</mml:mi><mml:msup><mml:mi>d</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mi>d</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> includes <inline-formula id="ieqn-79"><mml:math id="mml-ieqn-79"><mml:msup><mml:mi>N</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mi>N</mml:mi></mml:math></inline-formula> for FFT feature extraction, <inline-formula id="ieqn-80"><mml:math id="mml-ieqn-80"><mml:mi>L</mml:mi><mml:msup><mml:mi>d</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> for forward and backpropagation, and <inline-formula id="ieqn-81"><mml:math id="mml-ieqn-81"><mml:msup><mml:mi>d</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> for QR factorization, and remains manageable thanks to GPU parallelization.</p>
</sec>
</sec>
<sec id="s5">
<label>5.</label>
<title>Discussion</title>
<p>The proposed method achieved an average key entropy of 7.67 bits, confirming proximity to the theoretical optimum of 8 bits and indicating strong randomness as well as resistance to statistical attacks, as reported in [<xref ref-type="bibr" rid="ref-40">40</xref>]. The avalanche effect produced an average Hamming distance of 129.62 bits, with a 95% confidence interval from 129.11 to 130.13 and a p-value below 0.001, thereby satisfying the strict diffusion criteria established in [<xref ref-type="bibr" rid="ref-41">41</xref>]. Adjacent-pixel correlations decreased from values close to 0.9 to values statistically indistinguishable from zero, in accordance with [<xref ref-type="bibr" rid="ref-40">40</xref>], [<xref ref-type="bibr" rid="ref-42">42</xref>]. Ablation analysis showed that configuration C<sub>5</sub> offered the best trade-off between entropy maximization, decorrelation efficiency, and nonlinear sensitivity, consistent with [<xref ref-type="bibr" rid="ref-42">42</xref>], [<xref ref-type="bibr" rid="ref-43">43</xref>].</p>
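<p>The avalanche figure can be verified with a direct Hamming-distance computation between bit-level keys. The sketch below uses a hypothetical SHA-256-based key derivation (for illustration only; it is not the proposed Fourier&#x2013;Riesz generator) to show that a one-character change in the input flips about half of the 256 key bits.</p>

```python
import hashlib
import numpy as np

def derive_key_bits(seed):
    """Hypothetical 256-bit key as a bit array, derived via SHA-256."""
    digest = hashlib.sha256(seed).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

def hamming(a, b):
    """Number of differing bits between two equal-length bit arrays."""
    return int(np.sum(a != b))

k1 = derive_key_bits(b"frame-features-000")
k2 = derive_key_bits(b"frame-features-001")  # one-character change
print(hamming(k1, k2))  # about 128 of 256 bits on average
```

<p>If the generated keys are 256 bits long, the reported average Hamming distance of 129.62 bits corresponds to about 50.6% of the bits, consistent with the ideal 50% avalanche behavior.</p>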
<p>The decrypted sequences achieved a PSNR close to 42 dB and an SSIM value around 0.95, with confidence intervals confirming stability across all frames. These results demonstrate the superiority of the method over selective encryption approaches described in [<xref ref-type="bibr" rid="ref-39">39</xref>] and chaos-based schemes presented in [<xref ref-type="bibr" rid="ref-44">44</xref>], while maintaining robustness against noise, compression, and packet loss, as shown in [<xref ref-type="bibr" rid="ref-40">40</xref>], [<xref ref-type="bibr" rid="ref-43">43</xref>], [<xref ref-type="bibr" rid="ref-44">44</xref>].</p>
<p>Although the deep Fourier&#x2013;Riesz framework introduced additional computational cost associated with feature extraction and constrained optimization, and requires hardware acceleration for strict real-time deployment, it offers a favorable trade-off between security, reconstruction fidelity, and statistical stability.</p>
<p>The method also demonstrates resilience against model-extraction attacks, as the neural discriminator failed to recover exploitable patterns, and it limits side-channel leakage through entropy-preserving transformations. Remaining limitations concern computational load and sensitivity to training diversity.</p>
</sec>
<sec id="s6">
<label>6.</label>
<title>Conclusion</title>
<p>This research presented a novel framework for dynamic and adaptive key generation, leveraging Fourier&#x2013;Riesz features combined with deep learning. The approach produces high-entropy, decorrelated, and robust keys, ensuring strong cryptographic properties for videos. Experimental results demonstrated that deep spectro-directional features effectively capture temporal and spatial variations, providing robust and independent keys for each frame. Future work will focus on optimizing the key generation process, integrating the framework into modern codecs such as High Efficiency Video Coding (H.265/HEVC), evaluating performance on high-resolution video sequences, exploring alternative spectro-directional transformations, and developing adaptive mechanisms to enhance robustness and scalability in dynamic video scenarios.</p>
</sec>
</body>
<back>
<app-group>
<app id="app-1">
<title>Appendix</title>
<table-wrap id="table-6">
<label>Table VI</label>
<caption>
<title>Notation Summary</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr align="center">
<th>Symbol</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>&#x03A9;</td>
<td>Orthogonal matrix</td>
</tr>
<tr align="center">
<td><inline-formula id="ieqn-82"><mml:math id="mml-ieqn-82"><mml:msub><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
<td>Spectro-directional tensor</td>
</tr>
<tr align="center">
<td><inline-formula id="ieqn-83"><mml:math id="mml-ieqn-83"><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
<td>Deep neural network</td>
</tr>
<tr align="center">
<td><inline-formula id="ieqn-84"><mml:math id="mml-ieqn-84"><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
<td>Orthogonality penalty</td>
</tr>
<tr align="center">
<td><inline-formula id="ieqn-85"><mml:math id="mml-ieqn-85"><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
<td>Diversity penalty</td>
</tr>
<tr align="center">
<td><inline-formula id="ieqn-86"><mml:math id="mml-ieqn-86"><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mi>j</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
<td>Jacobian regularization</td>
</tr>
<tr align="center">
<td><inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula></td>
<td>Continuous encryption key</td>
</tr>
<tr align="center">
<td><inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:msubsup><mml:mi>K</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x2032;</mml:mo></mml:mrow></mml:msubsup></mml:math></inline-formula></td>
<td>Generated key</td>
</tr>
</tbody>
</table>
</table-wrap>
</app>
</app-group>
<sec id="s7">
<title>Code Availability Statement</title>
<p>The code used in this research is available from the corresponding author upon reasonable request.</p>
</sec>

<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Shahid</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Chaumont</surname> <given-names>M</given-names></string-name>, <string-name><surname>Puech</surname> <given-names>W</given-names></string-name></person-group>. <article-title>Fast protection of H.264/AVC by selective encryption of CAVLC and CABAC for I and P frames</article-title>. <source>IEEE Trans Circ Syst Video Technol</source>. <year>2011</year>;<volume>21</volume>(<issue>5</issue>):<comment>565&#x2013;76</comment>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>S</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>G</given-names></string-name>, <string-name><surname>Zheng</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Chaos-based encryption for digital images and videos</article-title>. <source>Chaos Solitons Fractals</source>. <year>2004</year>;<volume>22</volume>(<issue>2</issue>):<comment>341&#x2013;61</comment>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Lian</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Multimedia content encryption techniques: current status and challenges</article-title>. <source>Signal Process: Image Commun</source>. <year>2008</year>;<volume>23</volume>(<issue>3</issue>):<comment>230&#x2013;47</comment>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liu</surname> <given-names>F</given-names></string-name>, <string-name><surname>Koenig</surname> <given-names>H</given-names></string-name></person-group>. <article-title>A survey of video encryption algorithms</article-title>. <source>Comput Secur</source>. <year>2010</year>;<volume>29</volume>(<issue>1</issue>):<fpage>315</fpage>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Mousavi</surname> <given-names>A</given-names></string-name>, <string-name><surname>Baraniuk</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Learning to invert: signal recovery via deep convolutional networks</article-title>. <conf-name>Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)</conf-name>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Unser</surname> <given-names>M</given-names></string-name>, <string-name><surname>Van De Ville</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Wavelet steerability and the higher-order Riesz transform</article-title>. <source>IEEE Trans Image Process</source>. <year>2010</year>;<volume>19</volume>(<issue>3</issue>):<comment>636&#x2013;52</comment>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Vorontsov</surname> <given-names>A</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>X</given-names></string-name>, <string-name><surname>Burda</surname> <given-names>M</given-names></string-name>, <string-name><surname>Turner</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Orthogonality constraints in neural networks through Lie algebra parametrization</article-title>. <conf-name>Proceedings of the AAAI Conference on Artificial Intelligence</conf-name>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hashem</surname> <given-names>MI</given-names></string-name>, <string-name><surname>Kuban</surname> <given-names>KH</given-names></string-name></person-group>. <article-title>Key generation method from fingerprint image based on deep convolutional neural network model</article-title>. <source>Nexo Revista Cient&#x00ED;fica</source>. <year>2023</year>;<volume>36</volume>(<issue>6</issue>):<comment>906&#x2013;25</comment>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kuznetsov</surname> <given-names>O</given-names></string-name>, <string-name><surname>Zakharov</surname> <given-names>D</given-names></string-name>, <string-name><surname>Frontoni</surname> <given-names>E</given-names></string-name></person-group>. <article-title>Deep learning-based biometric cryptographic key generation with post-quantum security</article-title>. <source>Multimed Tools Appl</source>. <year>2024</year>;<volume>83</volume>(<issue>19</issue>):<comment>56909&#x2013;38</comment>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yirga</surname> <given-names>TG</given-names></string-name>, <string-name><surname>Yirga</surname> <given-names>HG</given-names></string-name>, <string-name><surname>Addisu</surname> <given-names>EG</given-names></string-name></person-group>. <article-title>Cryptographic key generation using deep learning with biometric face and finger vein data</article-title>. <source>Front Artif Intell</source>. <year>2025</year>;<volume>8</volume>:<fpage>1543268</fpage>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Erkan</surname> <given-names>U</given-names></string-name>, <string-name><surname>Toktas</surname> <given-names>A</given-names></string-name>, <string-name><surname>Engino&#x011F;lu</surname> <given-names>S</given-names></string-name>, <string-name><surname>Akbacak</surname> <given-names>E</given-names></string-name>, <string-name><surname>Thanh</surname> <given-names>DNH</given-names></string-name></person-group>. <article-title>An image encryption scheme based on chaotic logarithmic map and key generation using deep CNN</article-title>. <source>Multimed Tools Appl</source>. <year>2022</year>;<volume>81</volume>: <comment>7365&#x2013;91</comment>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Shao</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Li</surname> <given-names>B</given-names></string-name>, <string-name><surname>Fu</surname> <given-names>B</given-names></string-name>, <string-name><surname>Shang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Color image encryption based on discrete trinion Fourier transform and compressive sensing</article-title>. <source>Multimed Tools Appl</source>. <year>2024</year>;<volume>83</volume>(<issue>26</issue>):<fpage>67701&#x2013;22</fpage>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="web"><person-group person-group-type="author"><collab>Video Test Media.</collab></person-group> <article-title>YUV video sequences dataset</article-title>. <year>2019</year>. [Online]. Available from: <ext-link ext-link-type="uri" xlink:href="https://media.xiph.org/video/derf/">https://media.xiph.org/video/derf/</ext-link>. <comment>[Accessed: Mar 30, 2026]</comment>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Jakubovitz</surname> <given-names>D</given-names></string-name>, <string-name><surname>Giryes</surname> <given-names>R</given-names></string-name></person-group>. <source>Improving DNN Robustness to Adversarial Attacks Using Jacobian Regularization</source>. <publisher-name>Tel Aviv University, Tech. Rep.</publisher-name>; <year>2018</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Lizama-P&#x00E9;rez</surname> <given-names>LA</given-names></string-name></person-group>. <source>XOR Chain and Perfect Secrecy at the Dawn of the Quantum Era</source>. <publisher-name>Universidad T&#x00E9;cnica Federico Santa Mar&#x00ED;a, Tech. Rep.</publisher-name>; <year>2019</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Schneier</surname> <given-names>B</given-names></string-name></person-group>. <source>Applied Cryptography: Protocols, Algorithms, and Source Code in C</source>. <edition>2nd ed</edition>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>Wiley</publisher-name>; <year>2015</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Menezes</surname> <given-names>A</given-names></string-name>, <string-name><surname>Van Oorschot</surname> <given-names>P</given-names></string-name>, <string-name><surname>Vanstone</surname> <given-names>S</given-names></string-name></person-group>. <source>Handbook of Applied Cryptography</source>. <publisher-loc>Boca Raton, FL, USA</publisher-loc>: <publisher-name>CRC Press</publisher-name>; <year>1996</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Stallings</surname> <given-names>W</given-names></string-name></person-group>. <source>Cryptography and Network Security: Principles and Practice</source>. <publisher-name>Pearson</publisher-name>; <year>2017</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><collab>National Institute of Standards and Technology (NIST)</collab></person-group>. <source>Security Requirements for Cryptographic Modules</source>. <publisher-name>FIPS PUB</publisher-name>; <year>2001</year>. p.<comment>140&#x2013;2</comment>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Contreras-Rodr&#x00ED;guez</surname> <given-names>L</given-names></string-name>, <string-name><surname>Madarro-Cap&#x00F3;</surname> <given-names>EJ</given-names></string-name>, <string-name><surname>Contreras-Rodr&#x00ED;guez</surname> <given-names>L</given-names></string-name>, <string-name><surname>Leg&#x00F3;n-P&#x00E9;rez</surname> <given-names>CM</given-names></string-name>, <string-name><surname>Rojas</surname> <given-names>O</given-names></string-name>, <string-name><surname>Sosa-G&#x00F3;mez</surname> <given-names>G</given-names></string-name></person-group>. <article-title>Selecting an effective entropy estimator for short sequences of bits and bytes with maximum entropy</article-title>. <source>Entropy</source>. <year>2021</year>;<volume>23</volume>:<fpage>561</fpage>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Matsui</surname> <given-names>M</given-names></string-name></person-group>. <chapter-title>Linear cryptanalysis method for DES cipher</chapter-title>. In <source>Advances in Cryptology&#x2013;EUROCRYPT &#x2019;93, LNCS 765</source>. <publisher-name>Springer</publisher-name>, <year>1994</year>. pp. <comment>386&#x2013;97</comment>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Biham</surname> <given-names>E</given-names></string-name>, <string-name><surname>Shamir</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Differential cryptanalysis of DES-like cryptosystems</article-title>. <source>J Cryptol</source>. <year>1991</year>;<volume>4</volume>(<issue>1</issue>):<fpage>3</fpage>&#x2013;<lpage>72</lpage>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Daemen</surname> <given-names>J</given-names></string-name>, <string-name><surname>Rijmen</surname> <given-names>V</given-names></string-name></person-group>. <source>The Design of Rijndael: AES&#x2014;The Advanced Encryption Standard</source>. <publisher-name>Springer</publisher-name>; <year>2002</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Shannon</surname> <given-names>CE</given-names></string-name></person-group>. <article-title>Communication theory of secrecy systems</article-title>. <source>Bell Syst Tech J</source>. <year>1949</year>;<volume>28</volume>(<issue>4</issue>):<fpage>656</fpage>&#x2013;<lpage>715</lpage>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>H</given-names></string-name></person-group>. <chapter-title>How to break MD5 and other hash functions</chapter-title>. In <source>Advances in Cryptology&#x2013;EUROCRYPT 2005</source>. <publisher-name>Springer</publisher-name>, <year>2005</year>. pp. <fpage>19</fpage>&#x2013;<lpage>35</lpage>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>C</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>D</given-names></string-name>, <string-name><surname>Lo</surname> <given-names>K</given-names></string-name></person-group>. <article-title>Cryptanalysis of an image encryption scheme based on a compound chaotic sequence</article-title>. <source>Signal Process: Image Commun</source>. <year>2017</year>;<volume>52</volume>:<comment>130&#x2013;9</comment>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Noonan</surname> <given-names>JP</given-names></string-name>, <string-name><surname>Agaian</surname> <given-names>S</given-names></string-name></person-group>. <article-title>NPCR and UACI randomness tests for image encryption</article-title>. <source>Cyber J: Multidiscip J Sci Technol</source>. <year>2011</year>;<volume>1</volume>(<issue>2</issue>):<comment>31&#x2013;8</comment>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chen</surname> <given-names>G</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Chui</surname> <given-names>CK</given-names></string-name></person-group>. <article-title>A symmetric image encryption scheme based on 3D chaotic cat maps</article-title>. <source>Chaos Solitons Fractals</source>. <year>2004</year>;<volume>21</volume>(<issue>3</issue>):<comment>749&#x2013;61</comment>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Color image encryption using spatial chaotic systems</article-title>. <source>Signal Process</source>. <year>2012</year>;<volume>92</volume>(<issue>12</issue>):<comment>3492&#x2013;501</comment>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ghouate</surname> <given-names>NE</given-names></string-name></person-group>. <article-title>A high-entropy image encryption scheme using optimized chaotic maps</article-title>. <source>Sci Rep</source>. <year>2025</year>;<volume>15</volume>(<issue>1</issue>):<fpage>14784</fpage>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Alexan</surname> <given-names>W</given-names></string-name></person-group>. <article-title>A secure and efficient image encryption scheme based on a 5D hyperchaotic system</article-title>. <source>Sci Rep</source>. <year>2025</year>;<volume>15</volume>(<issue>1</issue>):<fpage>15794</fpage>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gao</surname> <given-names>S</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>J</given-names></string-name>, <string-name><surname>Iu</surname> <given-names>HHC</given-names></string-name>, <string-name><surname>Erkan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Zhou</surname> <given-names>S</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>R</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Development of a video encryption algorithm for critical areas using 2D extended Schaffer function map and neural networks</article-title>. <source>Signal Process: Image Commun</source>. <year>2024</year>;<volume>117</volume>:<fpage>103227</fpage>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kanungo</surname> <given-names>A</given-names></string-name>, <string-name><surname>Srivastava</surname> <given-names>A</given-names></string-name>, <string-name><surname>Anklesaria</surname> <given-names>S</given-names></string-name>, <string-name><surname>Churi</surname> <given-names>P</given-names></string-name></person-group>. <article-title>A systematic review on video encryption algorithms: a future research</article-title>. <source>J Auton Intell</source>. <year>2023</year>;<volume>6</volume>(<issue>2</issue>):<fpage>1&#x2013;12</fpage>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Salama</surname> <given-names>WM</given-names></string-name>, <string-name><surname>Aly</surname> <given-names>MH</given-names></string-name></person-group>. <source>Chaotic Maps Based Video Encryption: A New Approach</source>. <publisher-name>Pharos University/AASTMT</publisher-name>; <year>2020</year>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Elkamchouchi</surname> <given-names>H</given-names></string-name>, <string-name><surname>Salama</surname> <given-names>WM</given-names></string-name>, <string-name><surname>Abouelseoud</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>New video encryption schemes based on chaotic maps</article-title>. <source>IET Image Process</source>. <year>2020</year>.</mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>H</given-names></string-name></person-group>. <article-title>A multi-level secure video encryption framework integrating scalable video coding with joint source-channel cryptography</article-title>. <conf-name>Proceedings of the CONF-MPCS Symposium</conf-name>, <year>2025</year>.</mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Goyal</surname> <given-names>D</given-names></string-name>, <string-name><surname>Hemrajani</surname> <given-names>N</given-names></string-name></person-group>. <article-title>Novel selective video encryption for H.264 video</article-title>. <source>Int J Inform Secur Sci</source>. <year>2014</year>;<volume>3</volume>(<issue>4</issue>):<fpage>51&#x2013;61</fpage>.</mixed-citation></ref>
<ref id="ref-38"><label>[38]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hosny</surname> <given-names>KM</given-names></string-name>, <string-name><surname>Zaki</surname> <given-names>MA</given-names></string-name>, <string-name><surname>Lashin</surname> <given-names>NA</given-names></string-name>, <string-name><surname>Hamza</surname> <given-names>HM</given-names></string-name></person-group>. <article-title>Fast colored video encryption using block scrambling and multi-key generation</article-title>. <source>Vis Comput</source>. <year>2023</year>;<volume>39</volume>(<issue>12</issue>):<fpage>6041&#x2013;72</fpage>.</mixed-citation></ref>
<ref id="ref-39"><label>[39]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cheng</surname> <given-names>S</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Ao</surname> <given-names>N</given-names></string-name>, <string-name><surname>Han</surname> <given-names>Q</given-names></string-name></person-group>. <article-title>A selective video encryption scheme based on coding characteristics</article-title>. <source>Symmetry</source>. <year>2020</year>;<volume>12</volume>(<issue>3</issue>):<fpage>332</fpage>.</mixed-citation></ref>
<ref id="ref-40"><label>[40]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Das</surname> <given-names>S</given-names></string-name>, <string-name><surname>Jagan</surname> <given-names>L</given-names></string-name>, <string-name><surname>Singh</surname> <given-names>GK</given-names></string-name>, <string-name><surname>Kumar</surname> <given-names>S</given-names></string-name>, <string-name><surname>Rout</surname> <given-names>J</given-names></string-name>, <string-name><surname>Soni</surname> <given-names>A</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Multilayered digital image encryption approach to resist cryptographic attacks for cybersecurity</article-title>. <source>PeerJ Comput Sci</source>. <year>2025</year>;<volume>11</volume>:<fpage>e3260</fpage>.</mixed-citation></ref>
<ref id="ref-41"><label>[41]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Castro</surname> <given-names>JCH</given-names></string-name>, <string-name><surname>Sierra</surname> <given-names>JM</given-names></string-name>, <string-name><surname>Seznec</surname> <given-names>A</given-names></string-name>, <string-name><surname>Izquierdo</surname> <given-names>A</given-names></string-name>, <string-name><surname>Ribagorda</surname> <given-names>A</given-names></string-name></person-group>. <article-title>The strict avalanche criterion randomness test</article-title>. <source>Math Comput Simul</source>. <year>2005</year>;<volume>68</volume>(<issue>1</issue>):<fpage>1&#x2013;7</fpage>.</mixed-citation></ref>
<ref id="ref-42"><label>[42]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Panwar</surname> <given-names>K</given-names></string-name>, <string-name><surname>Kukreja</surname> <given-names>S</given-names></string-name>, <string-name><surname>Singh</surname> <given-names>A</given-names></string-name>, <string-name><surname>Singh</surname> <given-names>KK</given-names></string-name></person-group>. <article-title>Towards deep learning for efficient image encryption</article-title>. <source>Procedia Comput Sci</source>. <year>2023</year>;<volume>218</volume>:<fpage>644&#x2013;50</fpage>.</mixed-citation></ref>
<ref id="ref-43"><label>[43]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>M</given-names></string-name>, <string-name><surname>Fu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Yan</surname> <given-names>X</given-names></string-name>, <string-name><surname>Teng</surname> <given-names>L</given-names></string-name></person-group>. <article-title>A new chaos-based image encryption algorithm based on discrete Fourier transform and improved Joseph traversal</article-title>. <source>Mathematics</source>. <year>2024</year>;<volume>12</volume>(<issue>5</issue>):<fpage>638</fpage>.</mixed-citation></ref>
<ref id="ref-44"><label>[44]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Xu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Tong</surname> <given-names>XJ</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>M</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Peng</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system</article-title>. <source>J Opt Soc Am A</source>. <year>2016</year>;<volume>33</volume>(<issue>6</issue>):<fpage>1166&#x2013;74</fpage>.</mixed-citation></ref>
</ref-list>
</back></article>