Due to the increasing maturity of artificial intelligence (AI), several countries (e.g., the United States) have applied the technology to build risk assessment systems that predict certain individuals' risk of committing crimes and inform decisions on whether they should be detained or released. Closer analysis, however, shows that the construction of such risk assessment systems is not as just or objective as people might assume: subjective decisions and value judgments enter at every stage of a system's construction. In addition, owing to the nature of AI, the decision-making processes of risk assessment systems are often opaque, meaning that the systems lack transparency. We therefore suggest that such risk assessment systems must comply with a transparency requirement, and that their use should be subject to an accountability requirement, so as to ensure that the system meets its stated goal, that its outcomes are accurate, and that people's rights are protected. Furthermore, the outcome of a risk assessment system should not be the determinative factor in decisions made in criminal procedure, and whether the system's function is consistent with the purpose of the proceeding should be verified. Finally, there should be effective mechanisms to avoid automation complacency.