AsiaJCIS 2025: Papers with Abstracts
Abstract. Range arguments are a type of zero-knowledge proof that allows a prover to convince a verifier that a committed value falls within a specified range. Previously, most range arguments were constructed based on the DLOG assumption, and hence exponentiation operations are required for proof generation and verification. In addition, it is generally known that splitting a zero-knowledge proof protocol into a preprocessing phase and an online phase makes the computation after fixing the input efficient; however, no such protocol has been known for range arguments. This paper proposes an efficient range argument protocol with a preprocessing phase. Our proposal takes a new approach by using arithmetic circuits to express the constraints that the prover must prove. The prover (resp. verifier) can generate (resp. verify) part of the proof using multiplication and addition operations instead of exponentiation operations. Our range argument is a generic construction that does not rely on any particular mathematical assumption, which enables us to construct a post-quantum range argument. Our implementation evaluation shows that the total computation time for the prover and verifier in the online phase compares favorably with Bulletproofs, one of the state-of-the-art range proofs. In particular, the prover's computation is efficient.
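As a concrete illustration of the circuit-based approach (a minimal sketch under assumed parameters, not the paper's protocol), the following Python fragment expresses the range constraint 0 <= v < 2^N through bit-decomposition constraints that use only additions and multiplications modulo a prime; in a real argument the bits would be committed and the constraints proven in zero knowledge, while here both sides are evaluated in the clear only to show the circuit's shape:

```python
# A minimal sketch (not the paper's construction) of how a range constraint can
# be expressed as an arithmetic circuit: prove 0 <= v < 2**N by decomposing v
# into bits and checking only additions/multiplications mod P.
P = 2**61 - 1  # a prime field modulus, chosen arbitrarily for illustration
N = 32         # the range is [0, 2**N)

def bit_decompose(v, n):
    """Prover side: the witness is the bit decomposition of v."""
    return [(v >> i) & 1 for i in range(n)]

def check_range_circuit(v, bits):
    """Verifier-side circuit: every gate is an addition or multiplication mod P.
    Constraint 1: each b is a bit, i.e. b*(b-1) == 0 mod P.
    Constraint 2: sum(b_i * 2^i) == v mod P."""
    for b in bits:
        if (b * (b - 1)) % P != 0:
            return False
    acc = 0
    for i, b in enumerate(bits):
        acc = (acc + b * pow(2, i, P)) % P
    return acc == v % P

v = 123456789
assert check_range_circuit(v, bit_decompose(v, N))                 # in range
assert not check_range_circuit(-5 % P, bit_decompose(-5 % P, N))   # out of range
```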
Abstract. Blockchain technology enables secure transactions without a centralized administrator and is used in various applications; efficient verification in terms of computation and memory is therefore desired. To reduce computational and memory costs, Agrawal et al. proposed Key-Value Commitments (KVC) with efficient data verification. KVC supports both the insertion of new key-value pairs and the update of existing values. However, their KVC outputs the key with each insertion or update, which connects transactions and leaks whether they belong to the same user. In this research, we define unlinkability: the property that transactions are mutually independent and cannot be linked. Furthermore, their KVC has two other issues. One is that each proof consists of three group elements, which increases the computational cost of updating proofs. The other is that the sign of the value change is leaked during value updates. This research constructs a KVC that satisfies unlinkability by integrating Oblivious Accumulators (OblvAcc) into the KVC. The proposed method also resolves the remaining two issues: by processing update operations as insertions, it reduces the computational cost for the user by structuring proofs with only two elements, and it prevents sign leakage by outputting value changes as positive during updates. The key-binding security of our KVC is reduced to the GRSA assumption in the random oracle model, and to the SRSA assumption without random oracles. If the collision resistance, preimage resistance, and second-preimage resistance of the hash function hold, our KVC satisfies unlinkability.
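The unlinkability goal can be pictured with a toy fragment (an illustration only; the paper's construction integrates Oblivious Accumulators, not this hash trick): each operation publishes a fresh hash tag instead of the key, so the hash function's security properties prevent linking two operations on the same key.

```python
# Toy sketch of the unlinkability property (an assumption for illustration,
# NOT the paper's construction): the key never appears in the published
# record; only a tag computed with a fresh random nonce does.
import hashlib, os

def publish_operation(key: bytes, value: int):
    """Each operation outputs H(key || nonce) with a fresh random nonce,
    so repeated operations on the same key look independent."""
    nonce = os.urandom(16)
    tag = hashlib.sha256(key + nonce).hexdigest()
    return {"tag": tag, "value": value}  # 'key' itself is never published

op1 = publish_operation(b"alice-account", 10)
op2 = publish_operation(b"alice-account", 7)
assert op1["tag"] != op2["tag"]  # same key, but the two records are unlinkable
```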
Abstract. Gao et al. (IEEE Internet of Things Journal 2024) proposed public-key inverted-index keyword search with a designated tester as an extension of public-key encryption with keyword search (PEKS). In their scheme, a server (a tester) has a secret key and uses it to run the search algorithm, due to the designated-tester setting. They proved that no keyword information is revealed from trapdoors under the decisional Diffie-Hellman (DDH) assumption. However, they also employed a symmetric pairing, which can be seen as a DDH solver. Thus, keyword information is expected to be revealed from trapdoors, since the underlying complexity assumption does not hold. In this paper, we demonstrate an attack against Gao et al.'s scheme in which keyword information is revealed from a trapdoor. Our attack requires only the server's secret key in addition to the challenge trapdoor, without any additional encryption or trapdoor queries, and its complexity is just two pairing computations. We remark that an adversary is not allowed to obtain the server's secret key in their security model, so our attack lies outside of that model. Thus, we discuss the roles of the server and stress that our attack scenario is reasonable.
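The core observation behind the attack, that a symmetric pairing e: G x G -> GT yields a DDH solver, can be sketched as follows. The exponent-level simulation of the pairing below is an assumption for illustration; a real attack would use an actual symmetric pairing library.

```python
# Given a tuple (g, g^a, g^b, g^z), anyone holding a symmetric pairing can test
# z == ab by checking e(g^a, g^b) == e(g, g^z), since both sides equal
# e(g, g)^{ab} iff z = ab. We represent every group element by its exponent.
import random

Q = 101  # toy prime group order, for illustration only

def e(x_exp, y_exp):
    """Simulated symmetric pairing: e(g^x, g^y) = gT^{x*y mod Q} (a stand-in)."""
    return (x_exp * y_exp) % Q

def ddh_solver(a_exp, b_exp, z_exp):
    """Decide whether (g, g^a, g^b, g^z) is a DDH tuple: two pairings, one compare."""
    return e(a_exp, b_exp) == e(1, z_exp)

a, b = random.randrange(1, Q), random.randrange(1, Q)
assert ddh_solver(a, b, (a * b) % Q)          # real DDH tuple: accepted
assert not ddh_solver(a, b, (a * b + 1) % Q)  # non-DDH tuple: rejected
```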
Abstract. The Ring-LWE problem is a fundamental component of lattice-based cryptography, and evaluating its security is a crucial challenge. Algorithms for solving the Ring-LWE problem can be classified into four categories: lattice basis reduction algorithms, algebraic methods, combinatorial methods, and exhaustive search. However, the combinatorial approach, the Ring-BKW algorithm, remains insufficiently analyzed. The Ring-BKW algorithm consists primarily of two steps, with the Reduction step being the bottleneck because many samples are required for decryption. In existing implementations of the Ring-BKW Reduction step, the block size remains fixed, preventing it from adapting to the sample-reduction process and efficiently inducing collisions. In this study, we introduce a method that allows the block size in the Reduction step of the Ring-BKW algorithm to be variable. We propose two approaches: a static decision method, where users manually specify the block size for each reduction step, and a dynamic decision method, where the algorithm autonomously adjusts the block size. The proposed method increases the number of collisions compared to existing methods, resulting in approximately 55-fold and 425-fold more reduced samples for static and dynamic block-size selection, respectively, in the Ring-LWE setting with q=17, n=2^4.
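The following simplified sketch shows a BKW-style Reduction step with a variable block size on plain LWE vectors (the ring structure, the paper's parameters, and its exact decision rule are omitted; the dynamic_width rule below is an assumption). Colliding samples agree on the current block of coordinates, so subtracting them zeroes that block, and a smaller block makes collisions more frequent when few samples remain, which is the intuition behind the dynamic choice.

```python
# Simplified BKW-style reduction with a variable block size (illustrative only).
import random

q, n = 17, 16

def reduce_block(samples, start, width):
    """One reduction step: pair samples that collide on coordinates
    [start, start+width) and subtract them, zeroing that block."""
    buckets, out = {}, []
    for a, c in samples:
        key = tuple(a[start:start + width])
        if key in buckets:
            a2, c2 = buckets.pop(key)
            out.append(([(x - y) % q for x, y in zip(a, a2)], (c - c2) % q))
        else:
            buckets[key] = (a, c)
    return out

def dynamic_width(num_samples):
    """Toy dynamic rule (an assumption, not the paper's exact criterion):
    grow the block only while q**width stays below the sample count."""
    w = 1
    while q ** (w + 1) < num_samples:
        w += 1
    return w

samples = [([random.randrange(q) for _ in range(n)], random.randrange(q))
           for _ in range(20000)]
start = 0
while start < n and samples:
    w = min(dynamic_width(len(samples)), n - start)
    samples = reduce_block(samples, start, w)
    start += w
print(len(samples), "reduced samples remain")
```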
Abstract. We introduce HQCS-R, a novel Hamming-metric code-based signature scheme over Z_q. The security of the proposed scheme is based on the hardness of the Hamming-metric restricted syndrome decoding problem for quasi-cyclic codes, where the error vectors are restricted to a proper subset of Z_q^n. Assuming the hardness of this problem, we prove that HQCS-R is EUF-CMA secure in the classical random oracle model. Furthermore, we thoroughly analyze the security of the scheme and compute a lower bound for the acceptance rate of signature generation. Based on these analyses, we present concrete parameters for HQCS-R. In particular, at the 128-bit security level, the public key and signature sizes of HQCS-R are 5888 bytes and 6265 bytes, respectively.
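The relation underlying the scheme's security can be made concrete with a small sketch (illustrative parameters and an assumed restricted subset E; not HQCS-R itself): a valid witness is an error vector whose entries all lie in E and whose syndrome under H matches s over Z_q.

```python
# Restricted syndrome decoding relation, illustrated with toy parameters:
# H e = s over Z_q, with every entry of e drawn from a proper subset E of Z_q.
import random

q, n, k = 127, 24, 12
E = {1, 2, 4, 8}  # a proper subset of Z_q (an assumption for illustration)

H = [[random.randrange(q) for _ in range(n)] for _ in range(k)]
e = [random.choice(sorted(E)) for _ in range(n)]                    # restricted error
s = [sum(H[i][j] * e[j] for j in range(n)) % q for i in range(k)]   # its syndrome

def verify_restricted_syndrome(H, e, s):
    """Check both parts of the relation: membership in E and H e = s mod q."""
    in_subset = all(x in E for x in e)
    matches = all(sum(Hi[j] * e[j] for j in range(len(e))) % q == si
                  for Hi, si in zip(H, s))
    return in_subset and matches

assert verify_restricted_syndrome(H, e, s)
```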
Abstract. Most existing quantum secret sharing (QSS) protocols can be classified into two types of transmission structures: tree-type and circular-type. Compared to circular-type protocols, tree-type QSS protocols are more difficult to implement in practice due to photon energy loss over long distances, rendering them unsuitable when parties are widely separated. However, circular-type protocols still face two major challenges: (1) maintaining the energy of transmitted photons across a long chain of intermediate nodes is infeasible, and (2) reliance on relay transmission makes them vulnerable to Trojan horse attacks. To address these issues, we propose two novel circular-type QSS protocols based on SWAP gates. The proposed protocols are immune to Trojan horse attacks and offer a practical solution for relay transmission, enabling the long-distance distribution of shadow keys to each agent.

Abstract. With the development of Distributed Vehicular Fog Services (VFS), the demand for vehicle authentication in high-speed mobility and cross-domain scenarios has grown significantly. However, traditional authentication schemes exhibit significant limitations in privacy protection, low latency, and multi-domain collaborative authentication, particularly in high-speed scenarios where vehicles frequently switch domains. Additionally, existing solutions relying on centralized authentication architectures are vulnerable to single points of failure, further exacerbating security risks. To address these challenges, this paper proposes a Blockchain-Assisted Traceable Cross-Domain Anonymous Authentication Mechanism (BTCAA), aimed at providing secure and efficient authentication for high-speed moving vehicles accessing VFS. BTCAA provides a flexible authentication process that allows vehicles to dynamically adjust authentication procedures based on their driving routes, and introduces anonymity to protect user privacy. The mechanism adopts a lightweight design to reduce authentication overhead while supporting identity traceability, ensuring the ability to verify vehicle identities in dispute scenarios without compromising anonymity. The decentralized architecture eliminates the risk of single points of failure, enhancing system security. Security analysis confirms that BTCAA effectively ensures privacy protection, identity traceability, message integrity, and confidentiality. Performance evaluations further demonstrate its high practicality and efficiency while maintaining robust security and privacy protection.

Abstract. Piccolo is a 64-bit block cipher proposed by Shibutani et al. in 2011, supporting 80-bit and 128-bit keys. In higher-order differential cryptanalysis, computer experiments have verified that Piccolo has a 6-round characteristic using the 32nd-order differential, which was theoretically extended to a 7-round characteristic using the 48th-order differential. The integral attack is a cryptanalytic technique similar to higher-order differential cryptanalysis. An investigation of integral characteristics using a Mixed Integer Linear Programming (MILP) model based on the bit-based division property revealed the existence of a 7-round integral characteristic using a 63rd-order differential, and a search based on the Boolean satisfiability problem (SAT) found an integral characteristic over the same number of rounds using the 56th-order differential. In this paper, we introduce an integral property based on the frequency distribution, clarify the reason for the 5-round integral characteristic using the 16th-order differential that we found by computer experiment, and derive further integral characteristics by round extension. We then show that if this property can be used in the attack equation, the key can be identified more efficiently than with the conventional method. We also present experimental results of a key-recovery attack against reduced-round Piccolo.
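How such a characteristic is verified by computer experiment can be sketched as follows: encrypt a structure of 2^16 chosen plaintexts in which 16 bits take all possible values (a 16th-order differential) and check whether the XOR-sum of the outputs vanishes. The toy round function below merely stands in for reduced-round Piccolo; it is an assumption, not the cipher.

```python
# Experimental check of an integral (higher-order differential) characteristic:
# XOR-sum the ciphertexts of a structure where 16 plaintext bits are "active".
MASK64 = (1 << 64) - 1

def toy_round(x, rk):
    """Placeholder round function (an assumption, NOT Piccolo):
    key XOR followed by a rotation; both are GF(2)-affine, so sums cancel."""
    x ^= rk
    return ((x << 13) | (x >> 51)) & MASK64

def xor_sum_over_structure(rounds, keys):
    acc = 0
    const = 0xA5A5A5A5 << 32          # the constant part of every plaintext
    for v in range(1 << 16):          # the active 16 bits take all values
        x = const | v
        for r in range(rounds):
            x = toy_round(x, keys[r])
        acc ^= x
    return acc

keys = [0x0123456789ABCDEF, 0xFEDCBA9876543210]
print(hex(xor_sum_over_structure(2, keys)))  # 0x0 -> the balanced property holds
```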
Abstract. With the development of the digital society, phishing attacks have become an increasingly serious cybersecurity threat, posing risks not only to general users but also serving as a common initial intrusion method in Advanced Persistent Threat (APT) attacks. In this study, we simulated a phishing attack targeting the Moodle system of National Taiwan Normal University and collected 104 valid survey responses to investigate phishing-website recognition behaviors. The results indicate that checking the URL is one of the most effective methods for users to identify phishing websites. In the future, we plan to develop a browser extension integrated with Large Language Models (LLMs) to automatically detect high-risk phishing websites and provide real-time warnings to users, thereby enhancing overall protection capabilities.

Abstract. We introduce a novel Public Key Encryption with Equality Test supporting Flexible Authorization scheme offering User-Level, Ciphertext-Level, and User-Specific-Ciphertext-Level authorizations. Notably, our construction achieves security under the Decisional Diffie-Hellman (DDH) assumption with a tight reduction, whereas existing works are either not tightly secure or rely heavily on random oracles. By relying solely on the standard DDH assumption, our scheme offers practical implementation without specialized cryptographic structures.

Abstract. Fast and efficient computation of the Number Theoretic Transform (NTT) is essential for accelerating cryptographic algorithms and large-integer multiplication. However, conventional NTT implementations rely on fixed computation paths that fail to adapt to varying input distributions and hardware-specific conditions. In this paper, we propose HybridNTT, a hybrid neural network architecture that combines 1D convolutional layers with Transformer encoders to dynamically predict the optimal NTT execution path. The model is trained on a large-scale synthetic dataset comprising various input distributions (uniform, normal, sparse, sorted, bursty, and patterned) alongside multiple NTT parameter settings, including different moduli and transform sizes. Experimental results demonstrate that HybridNTT achieves classification accuracy over 92% and reduces NTT execution time by an average of 34.7%, while maintaining low inference overhead (<5%). These findings highlight the feasibility of machine-learning-based optimization for fundamental mathematical operations like the NTT. The proposed method can be seamlessly integrated into cryptographic libraries or hardware accelerators to enable adaptive, context-aware performance tuning, offering a practical step toward intelligent mathematical computing.
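For reference, a single fixed NTT computation path of the kind HybridNTT selects among looks like the following minimal radix-2 implementation (the modulus and input are illustrative):

```python
# Minimal iterative Cooley-Tukey NTT over Z_Q; len(a) must be a power of two.
Q = 998244353   # NTT-friendly prime: 119 * 2**23 + 1, with primitive root 3
G = 3

def ntt(a):
    n = len(a)
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # butterfly passes of doubling length
    length = 2
    while length <= n:
        w_len = pow(G, (Q - 1) // length, Q)  # primitive length-th root of unity
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % Q
                a[k], a[k + length // 2] = (u + v) % Q, (u - v) % Q
                w = w * w_len % Q
        length <<= 1
    return a

print(ntt([1, 2, 3, 4, 0, 0, 0, 0]))
```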
Abstract. Nominative signatures allow us to indicate who can verify a signature, and they can be employed to construct a non-transferable signature verification system that prevents signature verification by a third party in unexpected situations. For example, such a system can prevent IOU/loan certificate verification in unexpected situations. However, nominative signatures themselves do not allow the verifier to check whether funds will be transferred in the future or have already been transferred. When the system involves a money transfer, as with cryptocurrencies/cryptoassets, it would be desirable to verify this fact at the same time. In this paper, we propose a smart contract-based non-transferable signature verification system using nominative signatures. We exploit the fact that invisibility, a security requirement for nominative signatures, allows us to publish nominative signatures on the blockchain. Our system can verify whether a money transfer will actually take place, in addition to indicating who can verify a signature. We evaluate the gas cost when a smart contract runs the verification algorithm of the Hanaoka-Schuldt nominative signature scheme (ACNS 2011, IEICE Trans. 2016).

Abstract. Attribute-based encryption (ABE) enables fine-grained access control over encrypted data. However, ABE requires a single trusted authority to issue decryption keys, which causes a key-escrow problem: if an adversary breaks into the system, the adversary can decrypt every ciphertext encrypted under it. In this work, we generalize the notion of registration-based encryption (RBE) to key-policy attribute-based encryption (KP-ABE). Through the introduction of RBE, users can autonomously generate their own keys, thereby effectively resolving the key-escrow problem of KP-ABE.

Abstract. In modern society, the use of personal data is advancing in many fields. However, such data utilization also increases the risk of privacy leakage, and Differential Privacy (DP) has been proposed as a privacy-protection measure. DP protects privacy when data collectors release data; however, since DP requires trusting the data collector, Local Differential Privacy (LDP) was proposed as a privacy-protection measure that does not rely on third-party trust. Under LDP, data providers perturb their own data directly, thereby preventing privacy leakage from personal data. LDP is useful in machine learning for both data privacy and model privacy. However, a challenge with LDP is the difficulty of balancing privacy protection and utility when dealing with high-dimensional data; to address this, techniques such as dimensionality reduction and data discretization have been proposed. A machine learning framework called SUPM has been proposed to satisfy LDP. In SUPM, all attribute types, including categorical and numerical, are converted into ordered discrete sets with domain size L, performing uniform weak anonymization and applying perturbation uniformly. In this case, the domain and the domain size L for each attribute must be predetermined regardless of the data characteristics, so the characteristics, such as the utility, of each attribute must be known in advance. This study proposes an attribute-domain reconstruction method that reduces the domain size while preserving data utility, using data collected during dimensionality reduction. The effectiveness of the proposed method is validated using two databases: ADULT and WDBC.
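The kind of perturbation applied after discretization can be sketched with generalized randomized response over an ordered domain of size L (an illustration; SUPM's exact mechanism may differ). The comparison at the end shows why reducing L, the aim of the proposed domain reconstruction, preserves more utility at the same privacy budget.

```python
# Generalized randomized response over a discrete domain of size L.
import math, random

def generalized_rr(x, L, eps):
    """Report the true category x in {0,...,L-1} with probability
    e^eps / (e^eps + L - 1), otherwise a uniformly random other category.
    This mechanism satisfies eps-LDP."""
    p = math.exp(eps) / (math.exp(eps) + L - 1)
    if random.random() < p:
        return x
    y = random.randrange(L - 1)          # uniform over the other L-1 categories
    return y if y < x else y + 1

# A smaller domain size L keeps more utility at the same epsilon.
for L in (16, 4):
    x = 7 % L
    hits = sum(generalized_rr(x, L, eps=1.0) == x for _ in range(10000))
    print(f"L={L}: true value reported {hits / 100:.1f}% of the time")
```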
Abstract. Post-quantum cryptography (PQC) offers resistance against quantum adversaries, but its practical implementations remain vulnerable to side-channel attacks (SCAs) that exploit timing, power, or electromagnetic leakage. In this study, we introduce an unsupervised, resource-efficient anomaly detection framework tailored to the unique constraints of PQC systems. Unlike traditional methods that rely on labeled attack traces or algorithm-specific profiling, our approach leverages an autoencoder trained solely on benign traces to learn deep latent representations of normal cryptographic behavior. The system flags deviations using reconstruction error and supports multiple PQC schemes, including Kyber and Dilithium, without retraining. Experimental results demonstrate an average classification accuracy of 98.1%, with a false positive rate of 0.7% and a false negative rate of 0.4%. Under adversarial perturbation and Gaussian noise, the model maintains an AUC-ROC of 1.00, confirming its robustness. Additionally, ablation studies across CNN, GRU, and Transformer architectures validate the autoencoder's superior trade-off between accuracy and latency, achieving an inference time of 0.036 ms and a model size of only 0.11 MB. This enables real-time deployment on constrained devices without sacrificing security. The proposed solution marks a step forward in scalable, adaptive post-quantum defenses and opens new directions for cryptographic anomaly detection with minimal overhead. The framework is deployable on real-world PQC-enabled IoT systems.
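The detection principle can be sketched as follows (trace length, layer sizes, and training data below are assumptions, not the paper's model): train an autoencoder on benign traces only, calibrate a threshold on benign reconstruction errors, and flag any trace whose error exceeds it.

```python
# Unsupervised anomaly detection via autoencoder reconstruction error (sketch).
import torch
import torch.nn as nn

T = 128  # samples per trace (an assumption)
model = nn.Sequential(                 # encoder -> 8-dim latent -> decoder
    nn.Linear(T, 32), nn.ReLU(),
    nn.Linear(32, 8), nn.ReLU(),
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, T),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

benign = torch.randn(2048, T)          # stand-in for benign side-channel traces
for _ in range(200):                   # training sees benign traces only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(benign), benign)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(benign) - benign) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()   # calibrated on benign data only

def is_anomalous(trace):
    """Flag a trace whose reconstruction error exceeds the benign threshold."""
    with torch.no_grad():
        e = ((model(trace) - trace) ** 2).mean()
    return bool(e > threshold)

print(is_anomalous(torch.randn(T) * 5))  # a large deviation is likely flagged
```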
Abstract. This paper presents a comparative analysis of two lattice-based post-quantum digital signature schemes: FALCON and SOLMAE. FALCON, which was finally selected by NIST for PQC standardization, represents an efficient realization of the GPV framework over NTRU lattices. SOLMAE, inspired by FALCON, Mitaka, and Antrag, aims to improve implementation simplicity and performance while preserving strong security guarantees. Adopting a pedagogical approach, we provide algorithmic insights into both schemes and conduct practical evaluations using their Python implementations, focusing on key generation, signing, and verification procedures. Performance comparisons at two NIST security levels (parameter sizes 512 and 1024) highlight SOLMAE's potential advantages in simplicity and execution time, suggesting its suitability for deployment in resource-constrained environments.

Abstract. In recent years, the adoption of virtual reality (VR) technology has been on the rise, and the metaverse is attracting attention as a next-generation form of internet usage. VR offers a variety of applications and content, such as education, gaming, and tourism, where users can remain anonymous and behave as fictional characters. However, individuals in the real world may be identified from motion data that records VR users' head and hand movements in detail as time-series data. Nair and Lieber demonstrated that publicly available replay data from the VR rhythm game "Beat Saber" can identify individuals with over 90% accuracy. Nevertheless, the features that had the greatest impact on identification accuracy were static attributes such as height and arm length. Therefore, in this study, we attempt to identify individuals in the VR domain based on dynamic features, such as users' distinctive ways of moving their arms, by employing the Dynamic Time Warping (DTW) distance derived from motion data recorded during VR experiences.
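A minimal DTW implementation shows the distance being used (one-dimensional series for brevity; real motion data is multi-dimensional, where the pointwise cost becomes a vector norm):

```python
# Classic dynamic-programming DTW distance between two time series.
def dtw_distance(s, t):
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # extend the best of: repeat s, repeat t, or advance both
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Identification by nearest neighbor under DTW: the same gesture performed at a
# different speed stays close, while a different motion pattern is farther away.
a = [0, 1, 2, 3, 2, 1, 0]
b = [0, 0, 1, 2, 2, 3, 2, 1, 0]   # same shape as a, time-warped -> distance 0
c = [3, 1, 0, 2, 0, 3, 1]          # a different pattern -> larger distance
print(dtw_distance(a, b), dtw_distance(a, c))
```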
Abstract. As cyberattacks become increasingly sophisticated, organizations face an urgent need for timely and accurate incident response to reduce their impact on critical systems. Automating the analysis of network traffic logs has become essential for supporting security analysts and specialists. Although many previous studies have applied machine learning to this task, they often encounter challenges such as dependence on large-scale analytics platforms, limited exploration of machine learning algorithms, and difficulties in deploying distributed systems due to high costs, complexity, and privacy concerns. To tackle these limitations, we propose a lightweight and accurate machine learning-based framework for the automatic analysis of network traffic logs. Our approach transforms log data into feature vectors using a document-based feature representation method. Experimental results on benchmark datasets demonstrate that our method enables efficient and effective traffic log analysis suitable for practical deployment.
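The document-based feature idea can be sketched as follows (the toy log lines, labels, and the choice of TF-IDF with logistic regression are assumptions for illustration): each log line is treated as a document, vectorized, and fed to a lightweight classifier.

```python
# Document-style featurization of traffic logs: TF-IDF over whitespace tokens,
# followed by a lightweight linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

logs = [
    "GET /index.html 200 1024 mozilla",
    "GET /admin.php 404 512 sqlmap",
    "POST /login 200 256 mozilla",
    "GET /etc/passwd 403 128 curl",
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = suspicious (toy labels)

vec = TfidfVectorizer(token_pattern=r"\S+")  # each whitespace-separated token
X = vec.fit_transform(logs)
clf = LogisticRegression().fit(X, labels)

new_log = ["GET /admin.php 404 256 sqlmap"]
print(clf.predict(vec.transform(new_log)))   # likely [1], i.e. suspicious
```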