2019-09-22

FeedCacheService.IsRepopulationNeeded

On my dev farm I've noticed error 6398 in the Application event log, which occurs only intermittently:
The Execute method of job definition Microsoft.Office.Server.UserProfiles.LMTRepopulationJob (ID 59afb507-292a-40d1-97c3-0a038c9bf0e1) threw an exception. More information is included below.

Unexpected exception in FeedCacheService.IsRepopulationNeeded: Cache cluster is down, restart the cache cluster and Retry.
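
A quick way to confirm whether the cache cluster really is unhealthy is the AppFabric caching PowerShell module. A rough sketch (run from an elevated prompt on the cache host; the *LMTRepopulation* name filter is just my assumption about how the feed cache repopulation job is named):

# Check the state of the AppFabric cache cluster behind the SharePoint Distributed Cache
Import-Module DistributedCacheAdministration
Use-CacheCluster                  # connect to the cluster configured on this host
Get-CacheHost                     # every host should report Service Status: UP

# Locate the timer job from the event log entry (error 6398)
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
Get-SPTimerJob | Where-Object { $_.Name -like "*LMTRepopulation*" } |
    Select-Object Name, LastRunTime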

In the ULS log I found more details about the error:
Unexpected error occurred in method 'GetObject' , usage 'FeedCache' - Exception 'Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.) ---> System.ServiceModel.CommunicationException: The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.IO.IOException: The write operation failed, see inner exception. ---> System.ServiceModel.CommunicationException: The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.ObjectDisposedException: Cannot access a disposed object.  Object name: 'System.Net.Sockets.Socket'.    
 at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, SocketError& errorCode)    
 at System.ServiceModel.Channels.SocketConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
 --- End of inner exception stack trace ---
 at System.ServiceModel.Channels.SocketConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)    
 at System.ServiceModel.Channels.BufferedConnection.WriteNow(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, BufferManager bufferManager)    
 at System.ServiceModel.Channels.BufferedConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)    
 at System.ServiceModel.Channels.ConnectionStream.Write(Byte[] buffer, Int32 offset, Int32 count)    
 at System.Net.Security.NegotiateStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)    
 at System.Net.Security.NegotiateStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
 --- End of inner exception stack trace ---
 at System.Net.Security.NegotiateStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)    
 at System.Net.Security.NegotiateStream.Write(Byte[] buffer, Int32 offset, Int32 count)    
 at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
 --- End of inner exception stack trace ---
 at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)    
 at System.ServiceModel.Channels.StreamConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout, BufferManager bufferManager)    
 at System.ServiceModel.Channels.FramingDuplexSessionChannel.OnSendCore(Message message, TimeSpan timeout)    
 at System.ServiceModel.Channels.TransportDuplexSessionChannel.OnSend(Message message, TimeSpan timeout)    
 at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)    
 at Microsoft.ApplicationServer.Caching.CacheResolverChannel.Send(Message message, TimeSpan timeout)    
 at Microsoft.ApplicationServer.Caching.WcfClientChannel.SendOnChannel(EndpointID endpoint, TimeSpan& timeout, WaitCallback callback, Object state, Boolean async, IDuplexSessionChannel channel, Message message)
 --- End of inner exception stack trace ---
 at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody, RequestBody reqBody)    
 at Microsoft.ApplicationServer.Caching.DataCache.InternalGet(String key, DataCacheItemVersion& version, String region, IMonitoringListener listener)    
 at Microsoft.ApplicationServer.Caching.DataCache.<>c__DisplayClass51.<Get>b__50()
 at Microsoft.Office.Server.DistributedCaching.SPDistributedCache.GetObject(String key, String regionName)'.
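
If the error only shows up sporadically, the same entries can be fished out of ULS afterwards. A minimal sketch using Get-SPLogEvent (it can be slow on large log folders, so narrow the time window; I'm grepping for the ERRCA0017 code from the exception above):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Pull ULS entries from the last 24 hours and keep only the distributed cache failures
Get-SPLogEvent -StartTime (Get-Date).AddDays(-1) |
    Where-Object { $_.Message -match "ERRCA0017" } |
    Select-Object Timestamp, Category, Message |
    Format-List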

According to a post on the Premier Field Engineering Developer Blog, this is a generic error. To get additional details, WCF tracing should be enabled in the Distributed Cache service's config file (for me it was C:\Program Files\AppFabric 1.1 for Windows Server\DistributedCacheService.exe.config):
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\Temp\DistributedCacheService.svclog" />
      </listeners>
    </source>
  </sources>
  <trace autoflush="true" />
</system.diagnostics>

I restarted the Distributed Cache service and am now waiting for the error to occur again so I can capture the details of the actual exception.
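
For reference, the restart can also be scripted. This is only a sketch for a single-server dev farm; on a multi-server farm you'd normally drain the host first with Stop-SPDistributedCacheServiceInstance -Graceful before bouncing the service:

# Restart the cache cluster so DistributedCacheService.exe.config is re-read
Import-Module DistributedCacheAdministration
Use-CacheCluster
Restart-CacheCluster

# Alternative on a dev box: restart just the Windows service
Restart-Service -Name AppFabricCachingService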

EDIT: Nothing useful was found in the trace log. I've enabled verbose logging for the SharePoint Server > Distributed Cache category.
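
The same verbose level can be set from PowerShell; a sketch, assuming the category identity on this build is "SharePoint Server:Distributed Cache":

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Raise only the Distributed Cache category to Verbose instead of the whole farm
Set-SPLogLevel -TraceSeverity Verbose -Identity "SharePoint Server:Distributed Cache"

# When the investigation is over, reset the levels back to the defaults:
# Clear-SPLogLevel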

EDIT2: I've enabled the Admin and Debug event logs for the Distributed Cache service.
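
Those channels can also be switched on with wevtutil; a sketch, assuming the AppFabric caching channels are named "Microsoft-Windows-Application Server-System Services/Admin" and ".../Debug" (verify the exact names in Event Viewer under Applications and Services Logs first):

# Enable the Admin and Debug event channels for the AppFabric Caching Service
wevtutil sl "Microsoft-Windows-Application Server-System Services/Admin" /e:true /q:true
wevtutil sl "Microsoft-Windows-Application Server-System Services/Debug" /e:true /q:true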

2 comments:

  1. Hi,
    have the exact same errors.
    Did you solve this problem?

    1. No, unfortunately I didn't have the time to finish it.
