Red de Bibliotecas Virtuales de Ciencias Sociales en América Latina y el Caribe

Please use this identifier to cite or link to this item:
https://biblioteca-repositorio.clacso.edu.ar/handle/CLACSO/181194
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.creator | Aguirre Sala, Jorge Francisco | - |
dc.date | 2022-05-09 | - |
dc.date.accessioned | 2023-03-15T20:34:27Z | - |
dc.date.available | 2023-03-15T20:34:27Z | - |
dc.identifier | https://revistas.ucm.es/index.php/TEKN/article/view/79692 | - |
dc.identifier | 10.5209/tekn.79692 | - |
dc.identifier.uri | https://biblioteca-repositorio.clacso.edu.ar/handle/CLACSO/181194 | - |
dc.description | In seeking to specify algorithmic responsibility, the aim is to classify protective actions against the impact of artificial intelligence. The article describes the problems caused by artificial intelligence and reviews evaluation models and their components in order to guide best practice and methods in specifying the algorithmic footprint. The analysis of four evaluation models shows that the best are those oriented to risk and legal responsibility. Good evaluation practices endeavor to obtain quantitative expressions of qualitative aspects, while the conclusions warn of difficulties in building standardized formulas. The metrics of these quantitative expressions must apply weights based on the number of areas affected and establish severity on four levels of impact, risk or damage. This permits a symmetrical set of four protective actions: prohibiting some systems, ensuring damage repair, promoting impact mitigation, and establishing risk prevention. | en-US |
dc.description | Especificar la responsabilidad algorítmica tiene por objetivo clasificar las acciones de protección ante los impactos de la Inteligencia Artificial. La descripción de los problemas causados por la Inteligencia Artificial, aunada a la revisión de los modelos y componentes de las evaluaciones, permite discernir sobre las buenas prácticas y métodos para establecer la huella algorítmica y las respectivas consecuencias. Se enumeran los seis inconvenientes causados por la Inteligencia Artificial, enfatizando las violaciones a los derechos fundamentales y las carencias de las autoridades para garantizar las normativas aplicables. El análisis de cuatro modelos de evaluación muestra la conveniencia de enfocarse en el riesgo. Se analizan los componentes y elementos deseables en todas las evaluaciones de impacto algorítmico desde la literatura atingente de los años 2020 y 2021. Se recogen las lecciones de las buenas prácticas de evaluación para demostrar que: las discusiones sugieren transitar hacia expresiones cuantitativas de los aspectos cualitativos, mientras las conclusiones advierten dificultades para construir una fórmula estandarizada de evaluación. Se propone que las métricas procedan por ponderaciones o valores factoriales, según el número de ámbitos o dominios afectados y la gravedad se establezca en cuatro niveles de impacto, riesgo o daño. En simetría se plantean cuatro acciones de protección: prohibir algunos sistemas de Inteligencia Artificial, asegurar la reparación de daños causados por decisiones tomadas con algoritmos, promover la mitigación de impactos indeseables y establecer la prevención de riesgos. | es-ES |
dc.description | A especificação da responsabilidade algorítmica visa classificar as acções de protecção contra os impactos da Inteligência Artificial. A descrição dos problemas causados pela Inteligência Artificial, juntamente com a revisão dos modelos e componentes das avaliações, permitem discernir boas práticas e métodos para estabelecer a pegada algorítmica e as respectivas consequências. Os seis inconvenientes causados pela Inteligência Artificial são enumerados, salientando as violações dos direitos fundamentais e as deficiências das autoridades em garantir a regulamentação aplicável. A análise de quatro modelos de avaliação mostra a conveniência de se concentrar no risco. Analisa os componentes e elementos desejáveis em todas as avaliações de impacto algorítmicas da literatura relevante para os anos 2020 e 2021. São extraídas lições de boas práticas de avaliação para mostrar que: as discussões sugerem que se avança para expressões quantitativas de aspectos qualitativos, enquanto as conclusões alertam para as dificuldades na construção de uma fórmula de avaliação normalizada. Propõe-se que a métrica proceda por ponderações ou valores factoriais, de acordo com o número de áreas ou domínios afectados e a severidade seja estabelecida em quatro níveis de impacto, risco ou dano. Em simetria, são propostas quatro acções de protecção: proibir alguns sistemas de Inteligência Artificial, assegurar a reparação de danos causados por decisões tomadas com algoritmos, promover a mitigação de impactos indesejáveis, e estabelecer a prevenção de riscos. | pt-BR |
dc.format | application/pdf | - |
dc.language | spa | - |
dc.publisher | Grupo de Investigación Cultura Digital y Movimientos Sociales. Cibersomosaguas | es-ES |
dc.relation | https://revistas.ucm.es/index.php/TEKN/article/view/79692/4564456560597 | - |
dc.relation | /*ref*/Ada Lovelace Institute & AI Now Institute and Open Government Partnership. (2021). Algorithmic Accountability for the Public Sector. https://www.opengovpartnership.org/wp-content/uploads/2021/08/algorithmic-accountability-public-sector.pdf | - |
dc.relation | /*ref*/Aiken, C. (2021). Classifying AI Systems. Georgetown University's Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/classifying-ai-systems/ | - |
dc.relation | /*ref*/Andrade, N. y Kontschieder, V. (2021). AI Impact Assessment: A policy prototyping experiment. Open Loop. https://ssrn.com/abstract=3772500; http://dx.doi.org/10.2139/ssrn.3772500 | - |
dc.relation | /*ref*/Black, J. (Autumn 2005). The emergence of risk-based regulation and the new public management in the United Kingdom. Public Law, 512-549. http://eprints.lse.ac.uk/id/eprint/15809 | - |
dc.relation | /*ref*/Black, J. (2010). The role of risk in regulatory processes. En R. Baldwin, M. Cave, M. Lodge (Eds), The Oxford Handbook of Regulation (pp. 302-348). Oxford University Press. | - |
dc.relation | /*ref*/Bundesministerium für Arbeit und Soziales (2021). Observatorium Künstliche Intelligenz in Arbeit und Gesellschaft. https://www.ki-observatorium.de/ | - |
dc.relation | /*ref*/Castelló, C. (2021, 23 de julio). España, campo de pruebas europeo para la inteligencia artificial. Cinco Días. El País, Economía. https://cincodias.elpais.com/cincodias/2021/07/22/companias/1626964806_533819.html | - |
dc.relation | /*ref*/Chesterman, S. (2021). We, the robots? Regulating artificial intelligence and the limits of the law. Cambridge University Press. | - |
dc.relation | /*ref*/Coraggio, G. y Zappaterra, G. (2018). The risk-based approach to privacy: Risk or protection for business? Journal of Data Protection & Privacy, 1(4), 339-344. https://doi.org/10.1080/13669877.2018.1517381 | - |
dc.relation | /*ref*/Dalli, H. (2021). Artificial intelligence act. European Parliament, European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/694212/EPRS_BRI(2021)694212_EN.pdf | - |
dc.relation | /*ref*/Del Río, M. (2021, 5 de octubre). China publica código ético para regular la inteligencia artificial, ¿qué diría Isaac Asimov? GreenEntrepreneur. https://www.greenentrepreneur.com/article/389444 | - |
dc.relation | /*ref*/De Moya, J-F. y Pallud, J. (2020). From panopticon to heautopticon: A new form of surveillance introduced by quantified-self practices. Information Systems Journal, 30, 940–976. https://doi.org/10.1111/isj.12284 | - |
dc.relation | /*ref*/Diakopoulos, N. y Friedler, S. (2016). How to hold algorithms accountable. MIT Technology Review. https://www.technologyreview.com/2016/11/17/155957/how-to-hold-algorithms-accountable/ | - |
dc.relation | /*ref*/Edwards, L. y Veale, M. (2017). Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. Duke Law & Technology Review, 16(1), 18-84. https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1315&context=dltr; https://doi.org/10.2139/ssrn.2972855 | - |
dc.relation | /*ref*/Entrepreneur (2021, 26 de agosto). Usuarios del robot XiaoIce han terminado en terapia por enamorarse de su inteligencia artificial. Entrepreneur. https://www.greenentrepreneur.com/article/381951 | - |
dc.relation | /*ref*/European Union, Agency for Fundamental Rights (2020). Getting the future right. Artificial intelligence and fundamental rights. Publications Office of the European Union. https://doi.org/10.2811/774118 | - |
dc.relation | /*ref*/European Union (2021). Tool #12. Format of the IA report. https://ec.europa.eu/info/sites/default/files/file_import/better-regulation-toolbox-12_en_0.pdf | - |
dc.relation | /*ref*/European Union, European Commission (2021b). Regulatory scrutiny board opinion. Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts. EC https://www.eu.dk/samling/20211/kommissionsforslag/kom(2021)0206/forslag/1773317/2379083.pdf | - |
dc.relation | /*ref*/European Union, European Commission (2021d). Commission staff working document impact assessment. Annexes. EC SWD/2021/84 final Part 2/2. https://eur-lex.europa.eu/legal-content/ES/TXT/?uri=CELEX:52021SC0084 | - |
dc.relation | /*ref*/Gobierno de México (2018). Principios y guía de análisis de impacto para el desarrollo y uso de sistemas basados en inteligencia artificial en la administración pública federal. Secretaría de la Función Pública. https://www.gob.mx/cms/uploads/attachment/file/415644/Consolidado_Comentarios_Consulta_IA__1_.pdf | - |
dc.relation | /*ref*/Golbin, I. (2021, 28 de octubre). Algorithmic impact assessments: What are they and why do you need them? PricewaterhouseCoopers US. https://www.pwc.com/us/en/tech-effect/ai-analytics/algorithmic-impact-assessments.html | - |
dc.relation | /*ref*/Gonçalves, M. (2019). The risk-based approach under the new EU data protection regulation: a critical perspective. Journal of Risk Research, 23(3), 1-14. | - |
dc.relation | /*ref*/Government of Canada (2021). Algorithmic impact assessment tool. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html | - |
dc.relation | /*ref*/GOV.UK (2021). The roadmap to an effective AI assurance ecosystem. https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem/the-roadmap-to-an-effective-ai-assurance-ecosystem#the-roadmap-to-an-effective-ai-assurance-ecosystem | - |
dc.relation | /*ref*/Hartmann, K. y Wenzelburger, G. (2021). Uncertainty, risk and the use of algorithms in policy decisions: a case study on criminal justice in the USA. Policy Sciences, 54, 269–287. https://doi.org/10.1007/s11077-020-09414-y | - |
dc.relation | /*ref*/Henz, P. (2021). Ethical and legal responsibility for artificial intelligence. Discover Artificial Intelligence, 1, 2. https://doi.org/10.1007/s44163-021-00002-4 | - |
dc.relation | /*ref*/Jiménez, A. y Rendueles, C. (2020). Capitalismo digital: fragilidad social, explotación y solucionismo tecnológico. Teknokultura. Revista de Cultura Digital y Movimientos Sociales, 17(2), 95-101. https://dx.doi.org/10.5209/TEKN.70378 | - |
dc.relation | /*ref*/Kaminski, M. y Malgieri, G. (2021). Algorithmic impact assessments under the GDPR: producing multi-layered explanations. International Data Privacy Law, 11(2), 125-144. https://doi.org/10.1093/idpl/ipaa020 | - |
dc.relation | /*ref*/Lean, P. (2019). The extension of legal personhood in artificial intelligence. Revista de Bioética y Derecho, 46, 47-66. https://scielo.isciii.es/scielo.php?pid=S1886-58872019000200004&script=sci_abstract&tlng=en | - |
dc.relation | /*ref*/Levy, D. (2007). Love and sex with robots: The evolution of human-robot relationships. HarperCollins Publishers. | - |
dc.relation | /*ref*/Macenaite, M. (2017). The “riskification” of European data protection law through a two-fold shift. European Journal of Risk Regulation, 8(3), 506-540. https://doi.org/10.1017/err.2017.40 | - |
dc.relation | /*ref*/Mateos-García, J. (2017, 17 de mayo). To err is algorithm: Algorithmic fallibility and economic organization. Nesta. https://www.nesta.org.uk/blog/to-err-is-algorithm-algorithmic-fallibility-and-economic-organisation/#_ednref12 | - |
dc.relation | /*ref*/Metcalf, J., et al. (2021a). Algorithmic impact assessments and accountability: The co-construction of impacts. FAccT ’21, March 3–10, 2021, Virtual Event, Canada. https://doi.org/10.1145/3442188.3445935 | - |
dc.relation | /*ref*/Metcalf, J., et al. (2021b). Assembling accountability. Algorithmic impact assessment for the public interest. Data & Society Research Institute. https://datasociety.net/wp-content/uploads/2021/06/Assembling-Accountability.pdf | - |
dc.relation | /*ref*/Muftic, N. (2021). Liability for artificial intelligence. En Z. Slakoper e I. Tot (Eds.), Digital Technologies and the Law of Obligations (pp. 95-118). Routledge. https://doi.org/10.4324/9781003080596 | - |
dc.relation | /*ref*/NeuboxBlog (2021). Los 5 “robots” más populares que trabajan como influencers. https://neubox.com/blog/los-5-robots-mas-populares-que-trabajan-como-influencers/ | - |
dc.relation | /*ref*/Organización para la Cooperación y el Desarrollo Económicos (OECD.AI). (2019). OECD AI Policy Observatory. https://oecd.ai/en/dashboards/policy-initiatives/2019-data-policyInitiatives-24186 | - |
dc.relation | /*ref*/Organización para la Cooperación y el Desarrollo Económicos (OECD.AI). (2021). OECD AI Policy Observatory. https://oecd.ai/en/dashboards | - |
dc.relation | /*ref*/UNESCO (2021). Proyecto de texto de la recomendación sobre la ética de la inteligencia artificial. En Informe de la Comisión de Ciencias Sociales y Humanas. https://unesdoc.unesco.org/ark:/48223/pf0000379920_spa | - |
dc.relation | /*ref*/Pascual, M. (2021, 22 de julio). El Gobierno prepara mecanismos para medir el impacto social de los algoritmos. El País, Tecnología. https://elpais.com/tecnologia/2021-07-22/el-gobierno-prepara-mecanismos-para-medir-el-impacto-social-de-los-algoritmos.html | - |
dc.relation | /*ref*/Ruckenstein, M. y Schüll, N. (2017). The datafication of health. Annual Review of Anthropology, 46(1), 261–278. https://doi.org/10.1146/annurev-anthro-102116-041244 | - |
dc.relation | /*ref*/Unión Europea, Comisión Europea (2020). Libro Blanco sobre la inteligencia artificial - un enfoque europeo orientado a la excelencia y la confianza. Unión Europea. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_es.pdf | - |
dc.relation | /*ref*/Unión Europea, Comisión Europea (2021a). Commission staff working document impact assessment. Accompanying the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Brussels: EC SWD/2021/84 final Part 1/2. https://eur-lex.europa.eu/legal-content/ES/TXT/?uri=CELEX:52021SC0084 | - |
dc.relation | /*ref*/Unión Europea, Comisión Europea (2021c). Propuesta de reglamento del Parlamento Europeo y del Consejo por el que se establecen normas armonizadas en materia de inteligencia artificial (ley de inteligencia artificial) y se modifican determinados actos legislativos de la unión. C.E. https://eur-lex.europa.eu/legal-content/ES/TXT/?uri=CELEX:52021PC0206 | - |
dc.relation | /*ref*/Vercelli, A. (2021). El extractivismo de grandes datos (personales) y las tensiones jurídico-políticas y tecnológicas vinculadas al voto secreto. THĒMIS-Revista de Derecho, 79, 111-125. https://doi.org/10.18800/themis.202101.006 | - |
dc.relation | /*ref*/Yeung, K. (2019). Responsibility and AI. A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. Council of Europe. https://rm.coe.int/responsability-and-ai-en/168097d9c5 | - |
dc.rights | Derechos de autor 2022 Teknokultura. Revista de Cultura Digital y Movimientos Sociales | es-ES |
dc.source | Teknokultura. Journal of Digital Culture and Social Movements; Vol. 19 No. 2 (2022): Digital Education in the Time of COVID-19; 265-275 | en-US |
dc.source | Teknokultura. Revista de Cultura Digital y Movimientos Sociales; Vol. 19 Núm. 2 (2022): La educación digital en tiempo del COVID-19; 265-275 | es-ES |
dc.source | Teknokultura. Revista de Cultura Digital e Movimentos Sociais; v. 19 n. 2 (2022): La educación digital en tiempo del COVID-19; 265-275 | pt-BR |
dc.source | 1549-2230 | - |
dc.subject | algorithmic footprint models | en-US |
dc.subject | artificial intelligence | en-US |
dc.subject | impact evaluation | en-US |
dc.subject | protective actions | en-US |
dc.subject | acciones de protección | es-ES |
dc.subject | evaluación de impacto | es-ES |
dc.subject | inteligencia artificial | es-ES |
dc.subject | modelos de huella algorítmica | es-ES |
dc.subject | acções de protecção | pt-BR |
dc.subject | avaliação de impacto | pt-BR |
dc.subject | inteligência artificial | pt-BR |
dc.subject | modelação de rastro algorítmico | pt-BR |
dc.title | Specifying algorithmic responsibility | en-US |
dc.title | Especificando la responsabilidad algorítmica | es-ES |
dc.title | Especificação da responsabilidade algorítmica | pt-BR |
dc.type | info:eu-repo/semantics/article | - |
dc.type | info:eu-repo/semantics/publishedVersion | - |
Appears in collections: | Facultad de Ciencias Políticas y Sociología - UCM - Cosecha
Files in this item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
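
The abstract above describes a concrete assessment scheme: metrics weighted by the number of affected areas or domains, severity bucketed into four levels of impact, risk or damage, and each level paired symmetrically with one protective action. The sketch below is a minimal illustration of such a scheme; the domain names, breadth weight, and thresholds are assumptions made for the example, not values from the article, which itself warns that a standardized evaluation formula is difficult to build.

```python
# Illustrative sketch (not the article's formula): per-domain severity
# ratings are averaged, weighted by how many domains are affected,
# bucketed into four levels, and each level is paired with one of the
# four protective actions named in the abstract.

ACTIONS = {  # four levels, lowest to highest severity
    1: "risk prevention",
    2: "impact mitigation",
    3: "damage repair",
    4: "prohibition of the system",
}

def severity_level(ratings: dict[str, float]) -> int:
    """Map per-domain ratings in [0, 1] to one of four severity levels."""
    if not ratings:
        return 1
    mean_severity = sum(ratings.values()) / len(ratings)
    # Assumed breadth weight: affecting more domains raises the score,
    # saturating once four or more domains are touched.
    breadth = min(len(ratings) / 4, 1.0)
    score = mean_severity * (0.5 + 0.5 * breadth)
    # Assumed equal-width bands; real thresholds would need calibration.
    for level, upper in ((1, 0.25), (2, 0.50), (3, 0.75)):
        if score < upper:
            return level
    return 4

# Example: a hypothetical system rated on three fundamental-rights domains.
ratings = {"privacy": 0.9, "non-discrimination": 0.8, "due process": 0.7}
level = severity_level(ratings)
print(f"level {level}: {ACTIONS[level]}")  # level 3: damage repair
```

The symmetry the abstract stresses is the key design point: each severity band triggers exactly one class of protective response, from preventive measures at the lowest level up to outright prohibition at the highest.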