* |
*n*x | Race-Condition-Robust Hardware-Software Equivalence in *n*x |
a |
abstraction | Concurrency and Models of Abstraction: Past, Present and Future |
anomaly detection | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
c |
CNN | Evaluation of FPGA Acceleration of Neural Networks |
concurrency | Concurrency and Models of Abstraction: Past, Present and Future |
CSP | Slurm Scheduling From Rules-Based Systems; Concurrency and Models of Abstraction: Past, Present and Future |
cuBLAS (CUDA Basic Linear Algebra Subprograms) | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
CUDA API | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
cuFFT (CUDA Fast Fourier Transform) | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
CULA (CUDA Linear Algebra) | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
Cyber Terrorism | Concurrency and Models of Abstraction: Past, Present and Future |
d |
Data Lake | Concurrency and Models of Abstraction: Past, Present and Future |
data provenance report | Is it feasible to identify outputs of an arbitrary process at run time without excessively slowing down workflows? |
Discrete Fourier Transform (DFT) | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
e |
Electrical Engineering | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
f |
fanotify | Is it feasible to identify outputs of an arbitrary process at run time without excessively slowing down workflows? |
FPGA | Evaluation of FPGA Acceleration of Neural Networks |
frequency domain analysis | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
g |
GPU computing | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
h |
hardware-software equivalence | Race-Condition-Robust Hardware-Software Equivalence in *n*x |
HLS | Evaluation of FPGA Acceleration of Neural Networks |
i |
inotify | Is it feasible to identify outputs of an arbitrary process at run time without excessively slowing down workflows? |
l |
Linux | Race-Condition-Robust Hardware-Software Equivalence in *n*x |
m |
Map Reduce | Concurrency and Models of Abstraction: Past, Present and Future |
meow | Slurm Scheduling From Rules-Based Systems; Is it feasible to identify outputs of an arbitrary process at run time without excessively slowing down workflows? |
Microservices | Concurrency and Models of Abstraction: Past, Present and Future |
p |
parallel computing | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
perf | Is it feasible to identify outputs of an arbitrary process at run time without excessively slowing down workflows? |
performance optimization | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
q |
quantum computing | Concurrency and Models of Abstraction: Past, Present and Future |
r |
race condition | Race-Condition-Robust Hardware-Software Equivalence in *n*x |
Radio Signal Detection | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
Rules-based | Slurm Scheduling From Rules-Based Systems |
s |
signal processing | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
Slurm | Slurm Scheduling From Rules-Based Systems |
SME | Evaluation of FPGA Acceleration of Neural Networks |
SSH | Race-Condition-Robust Hardware-Software Equivalence in *n*x |
strace | Is it feasible to identify outputs of an arbitrary process at run time without excessively slowing down workflows? |
t |
triangulation | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
w |
wireless communication | Accelerating Scientific and Engineering Applications through Cloud-based GPU Computing |
Workflow Manager | Is it feasible to identify outputs of an arbitrary process at run time without excessively slowing down workflows? |