The social login we had set up in our project started failing partway through with an error, and users could no longer log in.
A social login that had succeeded just 20 minutes earlier was suddenly failing.
Clicking the social login button on the frontend produced an error.
On the Spring side, the error log looked like this:
2025-08-06T10:47:45.164Z ERROR 1 --- [becareful-server] [io-8080-exec-12] OAuth2AuthorizationRequestRedirectFilter : Authorization Request failed: org.springframework.data.redis.RedisSystemException: Error in execution
org.springframework.security.oauth2.client.web.OAuth2AuthorizationRequestRedirectFilter$OAuth2AuthorizationRequestException: Error in execution
at org.springframework.security.oauth2.client.web.OAuth2AuthorizationRequestRedirectFilter.doFilterInternal(OAuth2AuthorizationRequestRedirectFilter.java:193) ~[spring-security-oauth2-client-6.4.2.jar!/:6.4.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) ~[spring-security-web-6.4.2.jar!/:6.4.2]
at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:243) ~[spring-webmvc-6.2.2.jar!/:6.2.2]
at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:238) ~[spring-security-config-6.4.2.jar!/:6.4.2]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:362) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:278) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.2.2.jar!/:6.2.2]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:164) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:140) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:483) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:397) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:905) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1741) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1190) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-10.1.34.jar!/:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63) ~[tomcat-embed-core-10.1.34.jar!/:na]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
Caused by: org.springframework.data.redis.RedisSystemException: Error in execution
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:52) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:50) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.PassThroughExceptionTranslationStrategy.translate(PassThroughExceptionTranslationStrategy.java:40) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:38) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceConnection.convertLettuceAccessException(LettuceConnection.java:310) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceConnection.await(LettuceConnection.java:1012) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceConnection.lambda$doInvoke$3(LettuceConnection.java:447) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceInvoker$Synchronizer.invoke(LettuceInvoker.java:673) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceInvoker$DefaultSingleInvocationSpec.getOrElse(LettuceInvoker.java:598) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceInvoker$SingleInvocationSpec.orElse(LettuceInvoker.java:385) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.lettuce.LettuceStringCommands.set(LettuceStringCommands.java:115) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.DefaultedRedisConnection.set(DefaultedRedisConnection.java:326) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.connection.DefaultStringRedisConnection.set(DefaultStringRedisConnection.java:770) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.core.DefaultValueOperations.lambda$set$15(DefaultValueOperations.java:208) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:411) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:378) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:117) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.core.DefaultValueOperations.set(DefaultValueOperations.java:208) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at org.springframework.data.redis.core.ValueOperations.set(ValueOperations.java:75) ~[spring-data-redis-3.4.2.jar!/:3.4.2]
at com.becareful.becarefulserver.global.security.CustomAuthorizationRequestResolver.customize(CustomAuthorizationRequestResolver.java:44) ~[!/:0.0.1-SNAPSHOT]
at com.becareful.becarefulserver.global.security.CustomAuthorizationRequestResolver.resolve(CustomAuthorizationRequestResolver.java:27) ~[!/:0.0.1-SNAPSHOT]
at org.springframework.security.oauth2.client.web.OAuth2AuthorizationRequestRedirectFilter.doFilterInternal(OAuth2AuthorizationRequestRedirectFilter.java:186) ~[spring-security-oauth2-client-6.4.2.jar!/:6.4.2]
... 60 common frames omitted
Caused by: io.lettuce.core.RedisReadOnlyException: READONLY You can't write against a read only replica.
at io.lettuce.core.internal.ExceptionFactory.createExecutionException(ExceptionFactory.java:148) ~[lettuce-core-6.4.2.RELEASE.jar!/:6.4.2.RELEASE/f4dfb40]
at io.lettuce.core.internal.ExceptionFactory.createExecutionException(ExceptionFactory.java:120) ~[lettuce-core-6.4.2.RELEASE.jar!/:6.4.2.RELEASE/f4dfb40]
at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:124) ~[lettuce-core-6.4.2.RELEASE.jar!/:6.4.2.RELEASE/f4dfb40]
at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:115) ~[lettuce-core-6.4.2.RELEASE.jar!/:6.4.2.RELEASE/f4dfb40]
at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:749) ~[lettuce-core-6.4.2.RELEASE.jar!/:6.4.2.RELEASE/f4dfb40]
at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:684) ~[lettuce-core-6.4.2.RELEASE.jar!/:6.4.2.RELEASE/f4dfb40]
at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:601) ~[lettuce-core-6.4.2.RELEASE.jar!/:6.4.2.RELEASE/f4dfb40]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1357) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:868) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[netty-transport-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.117.Final.jar!/:4.1.117.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.117.Final.jar!/:4.1.117.Final]
... 1 common frames omitted
The stack trace is very long, but the key phrase stands out:
Redis is in READONLY mode.
In other words, the login flow needed to write to Redis (we store refresh-token/OAuth2 state there), but the write failed because Redis was read-only, so the entire login logic failed, and Spring Security internally ended up responding with a 401.
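The stack trace points at CustomAuthorizationRequestResolver.customize as the spot where the failing SET happens. That class isn't shown in this post, but a resolver of that shape looks roughly like the sketch below (the structure is inferred from the stack trace only; the Redis key and stored value are made-up placeholders):

import java.time.Duration;

import jakarta.servlet.http.HttpServletRequest;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository;
import org.springframework.security.oauth2.client.web.DefaultOAuth2AuthorizationRequestResolver;
import org.springframework.security.oauth2.client.web.OAuth2AuthorizationRequestResolver;
import org.springframework.security.oauth2.core.endpoint.OAuth2AuthorizationRequest;

// Simplified sketch only -- not the project's actual class.
public class CustomAuthorizationRequestResolver implements OAuth2AuthorizationRequestResolver {

    private final OAuth2AuthorizationRequestResolver delegate;
    private final StringRedisTemplate redisTemplate;

    public CustomAuthorizationRequestResolver(ClientRegistrationRepository repository, StringRedisTemplate redisTemplate) {
        // The default resolver handles /oauth2/authorization/{registrationId}
        this.delegate = new DefaultOAuth2AuthorizationRequestResolver(repository, "/oauth2/authorization");
        this.redisTemplate = redisTemplate;
    }

    @Override
    public OAuth2AuthorizationRequest resolve(HttpServletRequest request) {
        return customize(delegate.resolve(request), request);
    }

    @Override
    public OAuth2AuthorizationRequest resolve(HttpServletRequest request, String clientRegistrationId) {
        return customize(delegate.resolve(request, clientRegistrationId), request);
    }

    private OAuth2AuthorizationRequest customize(OAuth2AuthorizationRequest authorizationRequest, HttpServletRequest request) {
        if (authorizationRequest == null) {
            return null;
        }
        // A SET like this is where the READONLY error surfaces: once Redis has been
        // flipped into replica mode, every write from the app fails and the login flow dies.
        redisTemplate.opsForValue()
                .set("oauth2:state:" + authorizationRequest.getState(), "per-request value (placeholder)", Duration.ofMinutes(5));
        return authorizationRequest;
    }
}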
This had happened a few times before, and rebooting the server used to fix it, but after some interval the problem kept coming back.
So this time, determined to get to the bottom of it, I read through the Redis container's logs, but the most recent entry was from several days before this error occurred.
What seemed odd while going through the logs and the Redis settings was that we had never configured any replica (Redis was running in standalone mode).
Since it was a single master node rather than a cluster, it might go down, sure, but a slave node simply should not exist.
For reference, here is our Redis configuration:
package com.becareful.becarefulserver.global.config;

import com.becareful.becarefulserver.domain.auth.dto.response.OAuth2LoginResponse;
import com.becareful.becarefulserver.domain.auth.dto.response.RegisteredUserLoginResponse;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
@EnableRedisRepositories
public class RedisConfig {

    @Value("${spring.data.redis.host}")
    private String host;

    @Value("${spring.data.redis.port}")
    private String port;

    @Bean
    public LettuceConnectionFactory lettuceConnectionFactory() {
        return new LettuceConnectionFactory(new RedisStandaloneConfiguration(host, Integer.parseInt(port)));
    }

    @Bean
    public RedisTemplate<?, ?> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        return template;
    }

    // ...
}
As the config shows, the LettuceConnectionFactory bean is created from a RedisStandaloneConfiguration, i.e. standalone mode.
https://sunghyun98.tistory.com/384?category=1171848
"[트러블 슈팅] 외부에서의 redis 접근으로 인한 복제 노드로 변환되는 문제" (a troubleshooting post about Redis being turned into a replica node by external access)
Reading that post, the situation looked a lot like ours.
The same error had occurred even though neither replication nor cluster mode was in use.
The one environmental difference was that in that post, Redis seemed to be installed directly on the instance.
Our Redis runs in Docker, so its logs go to the container output rather than a file, and they looked like this:
Starting Redis Server
1:C 23 Jul 2025 11:38:16.555 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:C 23 Jul 2025 11:38:16.555 # WARNING Your system is configured to use the 'xen' clocksource which might lead to degraded performance. Check the result of the [slow-clocksource] system check: run 'redis-server --check-system' to check if the system's clocksource isn't degrading performance.
1:C 23 Jul 2025 11:38:16.555 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 23 Jul 2025 11:38:16.555 * Redis version=8.0.3, bits=64, commit=00000000, modified=1, pid=1, just started
1:C 23 Jul 2025 11:38:16.555 * Configuration loaded
1:M 23 Jul 2025 11:38:16.556 * monotonic clock: POSIX clock_gettime
1:M 23 Jul 2025 11:38:16.561 * Running mode=standalone, port=6379.
1:M 23 Jul 2025 11:38:16.562 * <bf> RedisBloom version 8.0.1 (Git=unknown)
1:M 23 Jul 2025 11:38:16.562 * <bf> Registering configuration options: [
1:M 23 Jul 2025 11:38:16.562 * <bf> { bf-error-rate : 0.01 }
1:M 23 Jul 2025 11:38:16.562 * <bf> { bf-initial-size : 100 }
1:M 23 Jul 2025 11:38:16.562 * <bf> { bf-expansion-factor : 2 }
1:M 23 Jul 2025 11:38:16.562 * <bf> { cf-bucket-size : 2 }
1:M 23 Jul 2025 11:38:16.562 * <bf> { cf-initial-size : 1024 }
1:M 23 Jul 2025 11:38:16.562 * <bf> { cf-max-iterations : 20 }
1:M 23 Jul 2025 11:38:16.562 * <bf> { cf-expansion-factor : 1 }
1:M 23 Jul 2025 11:38:16.562 * <bf> { cf-max-expansions : 32 }
1:M 23 Jul 2025 11:38:16.562 * <bf> ]
1:M 23 Jul 2025 11:38:16.562 * Module 'bf' loaded from /usr/local/lib/redis/modules//redisbloom.so
1:M 23 Jul 2025 11:38:16.583 * <search> Redis version found by RedisSearch : 8.0.3 - oss
1:M 23 Jul 2025 11:38:16.583 * <search> RediSearch version 8.0.1 (Git=5688fcc)
1:M 23 Jul 2025 11:38:16.583 * <search> Low level api version 1 initialized successfully
1:M 23 Jul 2025 11:38:16.585 * <search> gc: ON, prefix min length: 2, min word length to stem: 4, prefix max expansions: 200, query timeout (ms): 500, timeout policy: return, cursor read size: 1000, cursor max idle (ms): 300000, max doctable size: 1000000, max number of search results: 1000000,
1:M 23 Jul 2025 11:38:16.586 * <search> Initialized thread pools!
1:M 23 Jul 2025 11:38:16.586 * <search> Disabled workers threadpool of size 0
1:M 23 Jul 2025 11:38:16.589 * <search> Subscribe to config changes
1:M 23 Jul 2025 11:38:16.589 * <search> Enabled role change notification
1:M 23 Jul 2025 11:38:16.591 * <search> Cluster configuration: AUTO partitions, type: 0, coordinator timeout: 0ms
1:M 23 Jul 2025 11:38:16.591 * <search> Register write commands
1:M 23 Jul 2025 11:38:16.591 * Module 'search' loaded from /usr/local/lib/redis/modules//redisearch.so
1:M 23 Jul 2025 11:38:16.594 * <timeseries> RedisTimeSeries version 80001, git_sha=577bfa8b5909e7ee572f0b651399be8303dc6641
1:M 23 Jul 2025 11:38:16.594 * <timeseries> Redis version found by RedisTimeSeries : 8.0.3 - oss
1:M 23 Jul 2025 11:38:16.594 * <timeseries> Registering configuration options: [
1:M 23 Jul 2025 11:38:16.594 * <timeseries> { ts-compaction-policy : }
1:M 23 Jul 2025 11:38:16.594 * <timeseries> { ts-num-threads : 3 }
1:M 23 Jul 2025 11:38:16.594 * <timeseries> { ts-retention-policy : 0 }
1:M 23 Jul 2025 11:38:16.594 * <timeseries> { ts-duplicate-policy : block }
1:M 23 Jul 2025 11:38:16.594 * <timeseries> { ts-chunk-size-bytes : 4096 }
1:M 23 Jul 2025 11:38:16.594 * <timeseries> { ts-encoding : compressed }
1:M 23 Jul 2025 11:38:16.594 * <timeseries> { ts-ignore-max-time-diff: 0 }
1:M 23 Jul 2025 11:38:16.594 * <timeseries> { ts-ignore-max-val-diff : 0.000000 }
1:M 23 Jul 2025 11:38:16.594 * <timeseries> ]
1:M 23 Jul 2025 11:38:16.596 * <timeseries> Detected redis oss
1:M 23 Jul 2025 11:38:16.597 * Module 'timeseries' loaded from /usr/local/lib/redis/modules//redistimeseries.so
1:M 23 Jul 2025 11:38:16.609 * <ReJSON> Created new data type 'ReJSON-RL'
1:M 23 Jul 2025 11:38:16.612 * <ReJSON> version: 80001 git sha: unknown branch: unknown
1:M 23 Jul 2025 11:38:16.612 * <ReJSON> Exported RedisJSON_V1 API
1:M 23 Jul 2025 11:38:16.612 * <ReJSON> Exported RedisJSON_V2 API
1:M 23 Jul 2025 11:38:16.612 * <ReJSON> Exported RedisJSON_V3 API
1:M 23 Jul 2025 11:38:16.612 * <ReJSON> Exported RedisJSON_V4 API
1:M 23 Jul 2025 11:38:16.612 * <ReJSON> Exported RedisJSON_V5 API
1:M 23 Jul 2025 11:38:16.612 * <ReJSON> Enabled diskless replication
1:M 23 Jul 2025 11:38:16.613 * <ReJSON> Initialized shared string cache, thread safe: false.
1:M 23 Jul 2025 11:38:16.613 * Module 'ReJSON' loaded from /usr/local/lib/redis/modules//rejson.so
1:M 23 Jul 2025 11:38:16.613 * <search> Acquired RedisJSON_V5 API
1:M 23 Jul 2025 11:38:16.613 * Server initialized
1:M 23 Jul 2025 11:38:16.613 * Ready to accept connections tcp
Starting Redis Server
1:C 27 Jul 2025 06:26:41.299 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:C 27 Jul 2025 06:26:41.300 # WARNING Your system is configured to use the 'xen' clocksource which might lead to degraded performance. Check the result of the [slow-clocksource] system check: run 'redis-server --check-system' to check if the system's clocksource isn't degrading performance.
1:C 27 Jul 2025 06:26:41.300 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 27 Jul 2025 06:26:41.300 * Redis version=8.0.3, bits=64, commit=00000000, modified=1, pid=1, just started
1:C 27 Jul 2025 06:26:41.300 * Configuration loaded
1:M 27 Jul 2025 06:26:41.302 * monotonic clock: POSIX clock_gettime
1:M 27 Jul 2025 06:26:41.306 * Running mode=standalone, port=6379.
1:M 27 Jul 2025 06:26:41.310 * <bf> RedisBloom version 8.0.1 (Git=unknown)
1:M 27 Jul 2025 06:26:41.310 * <bf> Registering configuration options: [
1:M 27 Jul 2025 06:26:41.310 * <bf> { bf-error-rate : 0.01 }
1:M 27 Jul 2025 06:26:41.310 * <bf> { bf-initial-size : 100 }
1:M 27 Jul 2025 06:26:41.310 * <bf> { bf-expansion-factor : 2 }
1:M 27 Jul 2025 06:26:41.310 * <bf> { cf-bucket-size : 2 }
1:M 27 Jul 2025 06:26:41.310 * <bf> { cf-initial-size : 1024 }
1:M 27 Jul 2025 06:26:41.310 * <bf> { cf-max-iterations : 20 }
1:M 27 Jul 2025 06:26:41.310 * <bf> { cf-expansion-factor : 1 }
1:M 27 Jul 2025 06:26:41.310 * <bf> { cf-max-expansions : 32 }
1:M 27 Jul 2025 06:26:41.310 * <bf> ]
1:M 27 Jul 2025 06:26:41.310 * Module 'bf' loaded from /usr/local/lib/redis/modules//redisbloom.so
1:M 27 Jul 2025 06:26:41.339 * <search> Redis version found by RedisSearch : 8.0.3 - oss
1:M 27 Jul 2025 06:26:41.340 * <search> RediSearch version 8.0.1 (Git=5688fcc)
1:M 27 Jul 2025 06:26:41.341 * <search> Low level api version 1 initialized successfully
1:M 27 Jul 2025 06:26:41.341 * <search> gc: ON, prefix min length: 2, min word length to stem: 4, prefix max expansions: 200, query timeout (ms): 500, timeout policy: return, cursor read size: 1000, cursor max idle (ms): 300000, max doctable size: 1000000, max number of search results: 1000000,
1:M 27 Jul 2025 06:26:41.343 * <search> Initialized thread pools!
1:M 27 Jul 2025 06:26:41.343 * <search> Disabled workers threadpool of size 0
1:M 27 Jul 2025 06:26:41.345 * <search> Subscribe to config changes
1:M 27 Jul 2025 06:26:41.345 * <search> Enabled role change notification
1:M 27 Jul 2025 06:26:41.345 * <search> Cluster configuration: AUTO partitions, type: 0, coordinator timeout: 0ms
1:M 27 Jul 2025 06:26:41.346 * <search> Register write commands
1:M 27 Jul 2025 06:26:41.346 * Module 'search' loaded from /usr/local/lib/redis/modules//redisearch.so
1:M 27 Jul 2025 06:26:41.350 * <timeseries> RedisTimeSeries version 80001, git_sha=577bfa8b5909e7ee572f0b651399be8303dc6641
1:M 27 Jul 2025 06:26:41.350 * <timeseries> Redis version found by RedisTimeSeries : 8.0.3 - oss
1:M 27 Jul 2025 06:26:41.351 * <timeseries> Registering configuration options: [
1:M 27 Jul 2025 06:26:41.351 * <timeseries> { ts-compaction-policy : }
1:M 27 Jul 2025 06:26:41.351 * <timeseries> { ts-num-threads : 3 }
1:M 27 Jul 2025 06:26:41.351 * <timeseries> { ts-retention-policy : 0 }
1:M 27 Jul 2025 06:26:41.351 * <timeseries> { ts-duplicate-policy : block }
1:M 27 Jul 2025 06:26:41.351 * <timeseries> { ts-chunk-size-bytes : 4096 }
1:M 27 Jul 2025 06:26:41.351 * <timeseries> { ts-encoding : compressed }
1:M 27 Jul 2025 06:26:41.351 * <timeseries> { ts-ignore-max-time-diff: 0 }
1:M 27 Jul 2025 06:26:41.351 * <timeseries> { ts-ignore-max-val-diff : 0.000000 }
1:M 27 Jul 2025 06:26:41.351 * <timeseries> ]
1:M 27 Jul 2025 06:26:41.352 * <timeseries> Detected redis oss
1:M 27 Jul 2025 06:26:41.353 * Module 'timeseries' loaded from /usr/local/lib/redis/modules//redistimeseries.so
1:M 27 Jul 2025 06:26:41.368 * <ReJSON> Created new data type 'ReJSON-RL'
1:M 27 Jul 2025 06:26:41.371 * <ReJSON> version: 80001 git sha: unknown branch: unknown
1:M 27 Jul 2025 06:26:41.371 * <ReJSON> Exported RedisJSON_V1 API
1:M 27 Jul 2025 06:26:41.371 * <ReJSON> Exported RedisJSON_V2 API
1:M 27 Jul 2025 06:26:41.371 * <ReJSON> Exported RedisJSON_V3 API
1:M 27 Jul 2025 06:26:41.371 * <ReJSON> Exported RedisJSON_V4 API
1:M 27 Jul 2025 06:26:41.371 * <ReJSON> Exported RedisJSON_V5 API
1:M 27 Jul 2025 06:26:41.371 * <ReJSON> Enabled diskless replication
1:M 27 Jul 2025 06:26:41.372 * <ReJSON> Initialized shared string cache, thread safe: false.
1:M 27 Jul 2025 06:26:41.372 * Module 'ReJSON' loaded from /usr/local/lib/redis/modules//rejson.so
1:M 27 Jul 2025 06:26:41.372 * <search> Acquired RedisJSON_V5 API
1:M 27 Jul 2025 06:26:41.372 * Server initialized
1:M 27 Jul 2025 06:26:41.374 * Ready to accept connections tcp
The day I was looking at these logs was August 6, 2025, and the Redis container had been running for 10 days.
Ten days before August 6 is July 27, so according to the log above, nothing had been logged since Redis started on July 27.
In other words, there was no trace of any external access either..
I was back to square one.....
What's more, Spring kept erroring out saying it was talking to a replica, yet inside the Redis container:
/data # redis-cli -h 127.0.0.1 -p 6379 INFO replication
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:471054d8c3da9c59f5db525775a45c6e2e543b51
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
The role is clearly master.....
Something felt off at this point, and while reading through the .env settings I noticed something strange.
REDIS_HOST=ec2-xx-xx-xxx-xxx.ap-northeast-2.compute.amazonaws.com
REDIS_HOST was hard-coded to an EC2 host name.
(The same value was being used both locally and on the server.)
Had I left the Redis port open, then? I checked the security group rules, and port 6379 was not open.
But then the connection shouldn't have worked in the first place, so how had it ever succeeded? The moment I asked that, it clicked.
We had recently migrated from the EC2 server we originally used to a new EC2 server, and just in case, the old EC2 instance had been left running.
And that REDIS_HOST pointed at the old EC2, and on that EC2, port 6379 was indeed open in the security group...
In other words, on the old EC2, Redis was reachable from the outside.
So I logged into the old EC2 and looked at its Redis logs:
159:C 06 Aug 2025 05:02:02.049 * DB saved on disk
159:C 06 Aug 2025 05:02:02.049 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 06 Aug 2025 05:02:02.143 * Background saving terminated with success
1:M 06 Aug 2025 06:02:03.083 * 1 changes in 3600 seconds. Saving...
1:M 06 Aug 2025 06:02:03.084 * Background saving started by pid 160
160:C 06 Aug 2025 06:02:03.092 * DB saved on disk
160:C 06 Aug 2025 06:02:03.092 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 06 Aug 2025 06:02:03.184 * Background saving terminated with success
1:M 06 Aug 2025 08:24:35.175 # Possible SECURITY ATTACK detected. It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection from 3.132.23.201:38838 aborted.
1:M 06 Aug 2025 08:26:48.592 # Possible SECURITY ATTACK detected. It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection from 3.132.23.201:60132 aborted.
1:M 06 Aug 2025 10:14:04.788 * 1 changes in 3600 seconds. Saving...
1:M 06 Aug 2025 10:14:04.788 * Background saving started by pid 161
161:C 06 Aug 2025 10:14:04.795 * DB saved on disk
161:C 06 Aug 2025 10:14:04.796 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 06 Aug 2025 10:14:04.889 * Background saving terminated with success
1:M 06 Aug 2025 10:27:42.031 * Background saving started by pid 162
162:C 06 Aug 2025 10:27:42.040 * DB saved on disk
162:C 06 Aug 2025 10:27:42.040 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 06 Aug 2025 10:27:42.123 * Background saving terminated with success
1:S 06 Aug 2025 10:27:42.131 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
1:S 06 Aug 2025 10:27:42.131 * Connecting to MASTER 109.244.159.27:26858
1:S 06 Aug 2025 10:27:42.131 * MASTER <-> REPLICA sync started
1:S 06 Aug 2025 10:27:42.131 * REPLICAOF 109.244.159.27:26858 enabled (user request from 'id=1571 addr=109.244.159.27:37196 laddr=172.18.0.3:6379 fd=12 name= age=1 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=49 qbuf-free=20425 argv-mem=26 multi-mem=0 rbs=1024 rbp=636 obl=0 oll=0 omem=0 tot-mem=22450 events=r cmd=slaveof user=default redir=-1 resp=3 lib-name= lib-ver=')
1:S 06 Aug 2025 10:27:46.325 # Error condition on socket for SYNC: Connection refused
1:S 06 Aug 2025 10:27:46.558 * Connecting to MASTER 109.244.159.27:26858
1:S 06 Aug 2025 10:27:46.558 * MASTER <-> REPLICA sync started
This is only an excerpt, but sure enough, it was full of error logs like these.
The log timestamps are in UTC, so adding 9 hours gives Korean time (KST).
A few dozen minutes after I had logged in successfully, a teammate reported that login was broken.
Redis switched into replica state at 19:27 KST, and the teammate said login started failing around 19:35, so the timing matches exactly.
The logs show that someone external ran the REPLICAOF command, trying to turn our Redis into a replica of their own server.
(So it was exactly the same situation as the blog post linked above.)
This was possible because on the old EC2, port 6379 was open in the security group and Redis was left completely exposed.
1:S 06 Aug 2025 10:27:42.131 * Connecting to MASTER 109.244.159.27:26858
1:S 06 Aug 2025 10:27:42.131 * MASTER <-> REPLICA sync started
1:S 06 Aug 2025 10:27:42.131 * REPLICAOF 109.244.159.27:26858 enabled (user request from 'id=1571 addr=109.244.159.27:37196 laddr=172.18.0.3:6379 fd=12 name= age=1 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=49 qbuf-free=20425 argv-mem=26 multi-mem=0 rbs=1024 rbp=636 obl=0 oll=0 omem=0 tot-mem=22450 events=r cmd=slaveof user=default redir=-1 resp=3 lib-name= lib-ver=')
1:S 06 Aug 2025 10:27:46.325 # Error condition on socket for SYNC: Connection refused
1:S 06 Aug 2025 10:27:46.558 * Connecting to MASTER 109.244.159.27:26858
1:S 06 Aug 2025 10:27:46.558 * MASTER <-> REPLICA sync started
A client connecting from 109.244.159.27 ran the REPLICAOF command, telling our Redis to replicate a "master" at that same address.
The actual sync, however, keeps failing (Connection refused), so the replication never completes.
From then on, the log is full of these repeated, failing connection attempts.
At this point I got curious and looked the IP address up.
Uh... it's reportedly a machine hosted on Tencent in China.
Once again, the source of the problem traced back to China..
Yep.. it keeps trying to replicate and keeps getting blocked.
This log shows up countless times.
Even as I write this post, it is still being printed over and over in real time.
So what was the fix?
The Redis host was managed through .env, and Redis itself was running in Docker.
Since both the Spring app and Redis run as Docker containers, they can talk to each other over the Docker network using the container name as the host, so I pointed REDIS_HOST at the Redis container instead of the EC2 host name.
After changing it like this, everything works again.
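For reference, the change looks roughly like this; it is only a sketch, and the service and image names are illustrative rather than our actual compose file:

# .env -- point the app at the Redis container by service name instead of an EC2 host
REDIS_HOST=redis
REDIS_PORT=6379

# docker-compose.yml (sketch)
services:
  app:
    image: becareful-server:latest   # the Spring Boot app (image name is illustrative)
    env_file: .env
    depends_on:
      - redis
  redis:
    image: redis:8.0.3               # image tag shown here is illustrative
    # no "ports:" mapping, so 6379 is reachable only on the internal Docker network

Because both containers sit on the same compose network, the hostname redis resolves to the Redis container, and nothing is published to the outside.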
Rebooting used to "fix" it in the past because it tore the Redis node down and brought up a fresh one,
and the problem kept recurring because outsiders who happened to stumble onto the exposed instance kept attempting replication.
This time, rebooting didn't help at all: since we had moved to a completely different EC2, the app and the Redis running on that new EC2 weren't even connected to each other, so no amount of rebooting would fix anything unless the old server's Redis was restarted.
On the new EC2, the Redis port isn't open at all, and everything communicates internally through Docker, so there is no way for the outside world to reach it.
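As a quick sanity check for this kind of setup (the commands assume the compose service names from the sketch above):

$ docker compose ps                                     # redis should show no 0.0.0.0:6379 port mapping
$ docker compose exec redis redis-cli ping              # PONG, reachable from inside the Docker network
$ docker compose exec redis redis-cli INFO replication  # should report role:master, connected_slaves:0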
And that wraps up the four-hour login troubleshooting saga...